diff --git a/.obsidian/community-plugins.json b/.obsidian/community-plugins.json index 5737bf04..a2f31287 100644 --- a/.obsidian/community-plugins.json +++ b/.obsidian/community-plugins.json @@ -19,5 +19,18 @@ "extended-graph", "mysnippets-plugin", "obsidian-pandoc-reference-list", - "obsidian-share-as-gist" + "obsidian-share-as-gist", + "obsidian-excalidraw-plugin", + "txt-as-md-obsidian", + "text-snippets-obsidian", + "obsidian-tasks-plugin", + "tag-wrangler", + "obsidian-plugin-toc", + "share-note", + "obsidian-shellcommands", + "obsidian-rollover-daily-todos", + "qmd-as-md-obsidian", + "number-headings-obsidian", + "note-aliases", + "nldates-obsidian" ] \ No newline at end of file diff --git a/.obsidian/plugins/rss-reader/data.json b/.obsidian/plugins/rss-reader/data.json index a74278d7..94881108 100644 --- a/.obsidian/plugins/rss-reader/data.json +++ b/.obsidian/plugins/rss-reader/data.json @@ -66,16 +66,16 @@ "description": "xkcd.com: A webcomic of romance and math humor.", "items": [ { - "title": "The Story Continues: Announcing Version 14 of Wolfram Language and Mathematica", - "description": "\"\"Version 14.0 of Wolfram Language and Mathematica is available immediately both on the desktop and in the cloud. See also more detailed information on Version 13.1, Version 13.2 and Version 13.3. Building Something Greater and Greater… for 35 Years and Counting Today we celebrate a new waypoint on our journey of nearly four decades with […]", - "content": "\"\"

Version 14.0 of Wolfram Language and Mathematica is available immediately both on the desktop and in the cloud. See also more detailed information on Version 13.1, Version 13.2 and Version 13.3.

Building Something Greater and Greater… for 35 Years and Counting

Today we celebrate a new waypoint on our journey of nearly four decades with the release of Version 14.0 of Wolfram Language and Mathematica. Over the two years since we released Version 13.0 we’ve been steadily delivering the fruits of our research and development in .1 releases every six months. Today we’re aggregating these—and more—into Version 14.0.

It’s been more than 35 years now since we released Version 1.0. And all those years we’ve been continuing to build a taller and taller tower of capabilities, progressively expanding the scope of our vision and the breadth of our computational coverage of the world:

Number of built-in functions

Version 1.0 had 554 built-in functions; in Version 14.0 there are 6602. And behind each of those functions is a story. Sometimes it’s a story of creating a superalgorithm that encapsulates decades of algorithmic development. Sometimes it’s a story of painstakingly curating data that’s never been assembled before. Sometimes it’s a story of drilling down to the essence of something to invent new approaches and new functions that can capture it.

And from all these pieces we’ve been steadily building the coherent whole that is today’s Wolfram Language. In the arc of intellectual history it defines a broad, new, computational paradigm for formalizing the world. And at a practical level it provides a superpower for implementing computational thinking—and enabling “computational X” for all fields X.

To us it’s profoundly satisfying to see what has been done over the past three decades with everything we’ve built so far. So many discoveries, so many inventions, so much achieved, so much learned. And seeing this helps drive forward our efforts to tackle still more, and to continue to push every boundary we can with our R&D, and to deliver the results in new versions of our system.

Our R&D portfolio is broad. From projects that get completed within months of their conception, to projects that rely on years (and sometimes even decades) of systematic development. And key to everything we do is leveraging what we have already done—often taking what in earlier years was a pinnacle of technical achievement, and now using it as a routine building block to reach a level that could barely even be imagined before. And beyond practical technology, we’re also continually going further and further in leveraging what’s now the vast conceptual framework that we’ve been building all these years—and progressively encapsulating it in the design of the Wolfram Language.

We’ve worked hard all these years not only to create ideas and technology, but also to craft a practical and sustainable ecosystem in which we can systematically do this now and into the long-term future. And we continue to innovate in these areas, broadening the delivery of what we’ve built in new and different ways, and through new and different channels. And in the past five years we’ve also been able to open up our core design process to the world—regularly livestreaming what we’re doing in a uniquely open way.

And indeed over the past several years the seeds of essentially everything we’re delivering today in Version 14.0 have been openly shared with the world, and represent an achievement not only for our internal teams but also for the many people who have participated in and commented on our livestreams.

Part of what Version 14.0 is about is continuing to expand the domain of our computational language, and our computational formalization of the world. But Version 14.0 is also about streamlining and polishing the functionality we’ve already defined. Throughout the system there are things we’ve made more efficient, more robust and more convenient. And, yes, in complex software, bugs of many kinds are a theoretical and practical inevitability. And in Version 14.0 we’ve fixed nearly 10,000 bugs, the majority found by our increasingly sophisticated internal software testing methods.

Now We Need to Tell the World

Even after all the work we’ve put into the Wolfram Language over the past several decades, there’s still another challenge: how to let people know just what the Wolfram Language can do. Back when we released Version 1.0 I was able to write a book of manageable size that could pretty much explain the whole system. But for Version 14.0—with all the functionality it contains—one would need a book with perhaps 200,000 pages.

And at this point nobody (even me!) immediately knows everything the Wolfram Language does. Of course one of our great achievements has been to maintain across all that functionality a tightly coherent and consistent design that results in there ultimately being only a small set of fundamental principles to learn. But at the vast scale of the Wolfram Language as it exists today, knowing what’s possible—and what can now be formulated in computational terms—is inevitably very challenging. And all too often when I show people what’s possible, I’ll get the response “I had no idea the Wolfram Language could do that!”

So in the past few years we’ve put increasing emphasis into building large-scale mechanisms to explain the Wolfram Language to people. It begins at a very fine-grained level, with “just-in-time information” provided, for example, through suggestions made when you type. Then for each function (or other construct in the language) there are pages that explain the function, with extensive examples. And now, increasingly, we’re adding “just-in-time learning material” that leverages the concreteness of the functions to provide self-contained explanations of the broader context of what they do.

By the way, in modern times we need to explain the Wolfram Language not just to humans, but also to AIs—and our very extensive documentation and examples have proved extremely valuable in training LLMs to use the Wolfram Language. And for AIs we’re providing a variety of tools—like immediate computable access to documentation, and computable error handling. And with our Chat Notebook technology there’s also a new “on ramp” for creating Wolfram Language code from linguistic (or visual, etc.) input.

But what about the bigger picture of the Wolfram Language? For both people and AIs it’s important to be able to explain things at a higher level, and we’ve been doing more and more in this direction. For more than 30 years we’ve had “guide pages” that summarize specific functionality in particular areas. Now we’re adding “core area pages” that give a broader picture of large areas of functionality—each one in effect covering what might otherwise be a whole product on its own, if it wasn’t just an integrated part of the Wolfram Language:

Core area pages

But we’re going even much further, building whole courses and books that provide modern hands-on Wolfram-Language-enabled introductions to a broad range of areas. We’ve now covered the material of many standard college courses (and quite a lot besides), in a new and very effective “computational” way, that allows immediate, practical engagement with concepts:

Wolfram U courses

All these courses involve not only lectures and notebooks but also auto-graded exercises, as well as official certifications. And we have a regular calendar of everyone-gets-together-at-the-same-time instructor-led peer Study Groups about these courses. And, yes, our Wolfram U operation is now emerging as a significant educational entity, with many thousands of students at any given time.

In addition to whole courses, we have “miniseries” of lectures about specific topics:

Miniseries video lectures

And we also have courses—and books—about the Wolfram Language itself, like my Elementary Introduction to the Wolfram Language, which came out in a third edition this year (and has an associated course, online version, etc.):

Elementary Introduction to the Wolfram Language

In a somewhat different direction, we’ve expanded our Wolfram Summer School to add a Wolfram Winter School, and we’ve greatly expanded our Wolfram High School Summer Research Program, adding year-round programs, middle-school programs, etc.—including the new “Computational Adventures” weekly activity program.

And then there’s livestreaming. We’ve been doing weekly “R&D livestreams” with our development team (and sometimes also external guests). And I myself have also been doing a lot of livestreaming (232 hours of it in 2023 alone)—some of it design reviews of Wolfram Language functionality, and some of it answering questions, technical and other.

The list of ways we’re getting the word out about the Wolfram Language goes on. There’s Wolfram Community, that’s full of interesting contributions, and has ever-increasing readership. There are sites like Wolfram Challenges. There are our Wolfram Technology Conferences. And lots more.

We’ve put immense effort into building the whole Wolfram technology stack over the past four decades. And even as we continue to aggressively build it, we’re putting more and more effort into telling the world about just what’s in it, and helping people (and AIs) to make the most effective use of it. But in a sense, everything we’re doing is just a seed for what the wider community of Wolfram Language users are doing, and can do. Spreading the power of the Wolfram Language to more and more people and areas.

The LLMs Have Landed

The machine learning superfunctions Classify and Predict first appeared in Wolfram Language in 2014 (Version 10). By the next year there were starting to be functions like ImageIdentify and LanguageIdentify, and within a couple of years we’d introduced our whole neural net framework and Neural Net Repository. Included in that were a variety of neural nets for language modeling, that allowed us to build out functions like SpeechRecognize and an experimental version of FindTextualAnswer. But—like everyone else—we were taken by surprise at the end of 2022 by ChatGPT and its remarkable capabilities.

Very quickly we realized that a major new use case—and market—had arrived for Wolfram|Alpha and Wolfram Language. For now it was not only humans who’d need the tools we’d built; it was also AIs. By March 2023 we’d worked with OpenAI to use our Wolfram Cloud technology to deliver a plugin to ChatGPT that allows it to call Wolfram|Alpha and Wolfram Language. LLMs like ChatGPT provide remarkable new capabilities in reproducing human language, basic human thinking and general commonsense knowledge. But—like unaided humans—they’re not set up to deal with detailed computation or precise knowledge. For that, like humans, they have to use formalism and tools. And the remarkable thing is that the formalism and tools we’ve built in Wolfram Language (and Wolfram|Alpha) are basically a broad, perfect fit for what they need.

We created the Wolfram Language to provide a bridge from what humans think about to what computation can express and implement. And now that’s what the AIs can use as well. The Wolfram Language provides a medium not only for humans to “think computationally” but also for AIs to do so. And we’ve been steadily doing the engineering to let AIs call on Wolfram Language as easily as possible.

But in addition to LLMs using Wolfram Language, there’s also now the possibility of Wolfram Language using LLMs. And already in June 2023 (Version 13.3) we released a major collection of LLM-based capabilities in Wolfram Language. One category is LLM functions, that effectively use LLMs as “internal algorithms” for operations in Wolfram Language:
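To make the idea concrete, here's a rough Python sketch of an "LLM function" (an analogy only, not the Wolfram Language API; `llm_function` and `complete` are hypothetical names, and the stub below stands in for a real LLM call so the example runs offline):

```python
# Sketch: wrap a prompt template as an ordinary function, so an LLM acts
# as an "internal algorithm". `complete` is whatever callable sends a
# prompt to an LLM; here a stub that echoes its prompt stands in for it.
def llm_function(template, complete):
    """Return a function that fills `template` and passes it to `complete`."""
    def f(*args):
        return complete(template.format(*args))
    return f

echo_llm = lambda prompt: prompt  # offline stand-in for a real LLM call

summarize = llm_function("Summarize in one sentence: {0}", echo_llm)
print(summarize("A long article about calculus."))
```

The point of the wrapper is that, once defined, such a function composes with ordinary code like any other function.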

In typical Wolfram Language fashion, we have a symbolic representation for LLMs: LLMConfiguration[] represents an LLM with its various parameters, promptings, etc. And in the past few months we’ve been steadily adding connections to the full range of popular LLMs, making Wolfram Language a unique hub not only for LLM usage, but also for studying the performance—and science—of LLMs.

You can define your own LLM functions in Wolfram Language. But there’s also the Wolfram Prompt Repository that plays a similar role for LLM functions as the Wolfram Function Repository does for ordinary Wolfram Language functions. There’s a public Prompt Repository that so far has several hundred curated prompts. But it’s also possible for anyone to post their prompts in the Wolfram Cloud and make them publicly (or privately) accessible. The prompts can define personas (“talk like a [stereotypical] pirate”). They can define AI-oriented functions (“write it with emoji”). And they can define modifiers that affect the form of output (“haiku style”).

Wolfram Prompt Repository

In addition to calling LLMs “programmatically” within Wolfram Language, there’s the new concept (first introduced in Version 13.3) of “Chat Notebooks”. Chat Notebooks represent a new kind of user interface, that combines the graphical, computational and document features of traditional Wolfram Notebooks with the new linguistic interface capabilities brought to us by LLMs.

The basic idea of a Chat Notebook—as introduced in Version 13.3, and now extended in Version 14.0—is that you can have “chat cells” (requested by typing ) whose content gets sent not to the Wolfram kernel, but instead to an LLM:

Write a haiku about a crocodile on the moon

You can use “function prompts”—say from the Wolfram Prompt Repository—directly in a Chat Notebook:

A cat ate my lunch

And as of Version 14.0 you can also knit Wolfram Language computations directly into your “conversation” with the LLM:

Make a haiku from RandomWord

(You type \\ to insert Wolfram Language, very much like the way you can use <**> to insert Wolfram Language into external evaluation cells.)

One thing about Chat Notebooks is that—as their name suggests—they really are centered around “chatting”, and around having a sequential interaction with an LLM. In an ordinary notebook, it doesn’t matter where in the notebook each Wolfram Language evaluation is requested; all that’s relevant is the order in which the Wolfram kernel does the evaluations. But in a Chat Notebook the “LLM evaluations” are always part of a “chat” that’s explicitly laid out in the notebook.

A key part of Chat Notebooks is the concept of a chat block: type ~ and you get a separator in the notebook that “starts a new chat”:

My name is Stephen

Chat Notebooks—with all their typical Wolfram Notebook editing, structuring, automation, etc. capabilities—are very powerful just as “LLM interfaces”. But there’s another dimension as well, enabled by LLMs being able to call Wolfram Language as a tool.

At one level, Chat Notebooks provide an “on ramp” for using Wolfram Language. Wolfram|Alpha—and even more so, Wolfram|Alpha Notebook Edition—let you ask questions in natural language, then have the questions translated into Wolfram Language, and answers computed. But in Chat Notebooks you can go beyond asking specific questions. Instead, through the LLM, you can just “start chatting” about what you want to do, then have Wolfram Language code generated, and executed:

How do you make a rosette with 5 lobes?

The workflow is typically as follows. First, you have to conceptualize in computational terms what you want. (And, yes, that step requires computational thinking—which is a very important skill that too few people have so far learned.) Then you tell the LLM what you want, and it’ll try to write Wolfram Language code to achieve it. It’ll typically run the code for you (but you can also always do it yourself)—and you can see whether you got what you wanted. But what’s crucial is that Wolfram Language is intended to be read not only by computers but also by humans. And particularly since LLMs actually usually seem to manage to write pretty good Wolfram Language code, you can expect to read what they wrote, and see if it’s what you wanted. If it is, you can take that code, and use it as a “solid building block” for whatever larger system you might be trying to set up. Otherwise, you can either fix it yourself, or try chatting with the LLM to get it to do it.

One of the things we see in the example above is the LLM—within the Chat Notebook—making a “tool call”, here to a Wolfram Language evaluator. In the Wolfram Language there’s now a whole mechanism for defining tools for LLMs—with each tool being represented by an LLMTool symbolic object. In Version 14.0 there’s an experimental version of the new Wolfram LLM Tool Repository with some predefined tools:

Wolfram LLM Tool Repository

In a default Chat Notebook, the LLM has access to some default tools, which include not only the Wolfram Language evaluator, but also things like Wolfram documentation search and Wolfram|Alpha query. And it’s common to see the LLM go back and forth trying to write “code that works”, and for example sometimes having to “resort” (much like humans do) to reading the documentation.

Something that’s new in Version 14.0 is experimental access to multimodal LLMs that can take images as well as text as input. And when this capability is enabled, it allows the LLM to “look at pictures from the code it generated”, see if they’re what was asked for, and potentially correct itself:

Create graphics with a randomly colored disc

The deep integration of images into Wolfram Language—and Wolfram Notebooks—yields all sorts of possibilities for multimodal LLMs. Here we’re giving a plot as an image and asking the LLM how to reproduce it:

Create a similar plot

Another direction for multimodal LLMs is to take data (in the hundreds of formats accepted by Wolfram Language) and use the LLM to guide its visualization and analysis in the Wolfram Language. Here’s an example that starts from a file data.csv in the current directory on your computer:

Look at the file data.csv

One thing that’s very nice about using Wolfram Language directly is that everything you do (well, unless you use RandomInteger, etc.) is completely reproducible; do the same computation twice and you’ll get the same result. That’s not true with LLMs (at least right now). And so when one uses LLMs it feels like something more ephemeral and fleeting than using Wolfram Language. One has to grab any good results one gets—because one might never be able to reproduce them. Yes, it’s very helpful that one can store everything in a Chat Notebook, even if one can’t rerun it and get the same results. But the more “permanent” use of LLM results tends to be “offline”. Use an LLM “up front” to figure something out, then just use the result it gave.

One unexpected application of LLMs for us has been in suggesting names of functions. With the LLM’s “experience” of what people talk about, it’s in a good position to suggest functions that people might find useful. And, yes, when it writes code it has a habit of hallucinating such functions. But in Version 14.0 we’ve actually added one function—DigitSum—that was suggested to us by LLMs. And in a similar vein, we can expect LLMs to be useful in making connections to external databases, functions, etc. The LLM “reads the documentation”, and tries to write Wolfram Language “glue” code—which then can be reviewed, checked, etc., and if it’s right, can be used henceforth.
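The operation itself is elementary; here's a plain Python equivalent of what a DigitSum-style function computes (the name `digit_sum` and the optional `base` argument are illustrative, not the Wolfram signature):

```python
def digit_sum(n, base=10):
    """Sum of the digits of a non-negative integer n in the given base."""
    total = 0
    while n:
        total += n % base   # lowest digit
        n //= base          # drop it
    return total

print(digit_sum(1729))  # 1 + 7 + 2 + 9 = 19
```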

Then there’s data curation, which is a field that—through Wolfram|Alpha and many of our other efforts—we’ve become extremely expert at over the past couple of decades. How much can LLMs help with that? They certainly don’t “solve the whole problem”, but integrating them with the tools we already have has allowed us over the past year to speed up some of our data curation pipelines by factors of two or more.

If we look at the whole stack of technology and content that’s in the modern Wolfram Language, the overwhelming majority of it isn’t helped by LLMs, and isn’t likely to be. But there are many—sometimes unexpected—corners where LLMs can dramatically improve heuristics or otherwise solve problems. And in Version 14.0 there are starting to be a wide variety of “LLM inside” functions.

An example is TextSummarize, which is a function we’ve considered adding for many versions—but now, thanks to LLMs, can finally implement to a useful level:

The main LLMs that we’re using right now are based on external services. But we’re building capabilities to allow us to run LLMs in local Wolfram Language installations as soon as that’s technically feasible. And one capability that’s actually part of our mainline machine learning effort is NetExternalObject—a way of representing symbolically an externally defined neural net that can be run inside Wolfram Language. NetExternalObject allows you, for example, to take any network in ONNX form and effectively treat it as a component in a Wolfram Language neural net. Here’s a network for image depth estimation—that we’re here importing from an external repository (though in this case there’s actually a similar network already in the Wolfram Neural Net Repository):

Now we can apply this imported network to an image that’s been encoded with our built-in image encoder—then we’re taking the result and visualizing it:

It’s often very convenient to be able to run networks locally, but it can sometimes take quite high-end hardware to do so. For example, there’s now a function in the Wolfram Function Repository that does image synthesis entirely locally—but to run it, you do need a GPU with at least 8 GB of VRAM:

By the way, based on LLM principles (and ideas like transformers) there’ve been other related advances in machine learning that have been strengthening a whole range of Wolfram Language areas—with one example being image segmentation, where ImageSegmentationComponents now provides robust “content-sensitive” segmentation:

Still Going Strong on Calculus

When Mathematica 1.0 was released in 1988, it was a “wow” that, yes, now one could routinely do integrals symbolically by computer. And it wasn’t long before we got to the point—first with indefinite integrals, and later with definite integrals—where what’s now the Wolfram Language could do integrals better than any human. So did that mean we were “finished” with calculus? Well, no. First there were differential equations, and partial differential equations. And it took a decade to get symbolic ODEs to a beyond-human level. And with symbolic PDEs it took until just a few years ago. Somewhere along the way we built out discrete calculus, asymptotic expansions and integral transforms. And we also implemented lots of specific features needed for applications like statistics, probability, signal processing and control theory. But even now there are still frontiers.

And in Version 14 there are significant advances around calculus. One category concerns the structure of answers. Yes, one can have a formula that correctly represents the solution to a differential equation. But is it in the best, simplest or most useful form? Well, in Version 14 we’ve worked hard to make sure it is—often dramatically reducing the size of expressions that get generated.

Another advance has to do with expanding the range of “pre-packaged” calculus operations. We’ve been able to do derivatives ever since Version 1.0. But in Version 14 we’ve added implicit differentiation. And, yes, one can give a basic definition for this easily enough using ordinary differentiation and equation solving. But by adding an explicit ImplicitD we’re packaging all that up—and handling the tricky corner cases—so that it becomes routine to use implicit differentiation wherever you want:
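As a sketch of what the "basic definition" amounts to (a SymPy analogy, not the Wolfram ImplicitD API): for a curve F(x, y) = 0, the implicit derivative is dy/dx = -F_x/F_y.

```python
# Implicit differentiation done "by hand" with SymPy, illustrating the
# basic definition packaged up by an ImplicitD-style function.
import sympy as sp

x, y = sp.symbols('x y')
F = x**2 + y**2 - 1              # the unit circle, F(x, y) = 0
dydx = -sp.diff(F, x) / sp.diff(F, y)
print(sp.simplify(dydx))         # -x/y
```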

Another category of pre-packaged calculus operations new in Version 14 are ones for vector-based integration. These were always possible to do in a “do-it-yourself” mode. But in Version 14 they are now streamlined built-in functions—that, by the way, also cover corner cases, etc. And what made them possible is actually a development in another area: our decade-long project to add geometric computation to Wolfram Language—which gave us a natural way to describe geometric constructs such as curves and surfaces:

Related functionality new in Version 14 is ContourIntegrate:
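The original post shows a Wolfram Language example here; as a small illustration of the kind of answer a contour integral produces (a SymPy analogy via the residue theorem, not the Wolfram API), integrating 1/z counterclockwise around the unit circle gives 2πi:

```python
# Residue theorem: the contour integral of f around a closed curve equals
# 2*pi*i times the sum of the residues of f at the enclosed poles.
import sympy as sp

z = sp.symbols('z')
integral = 2 * sp.pi * sp.I * sp.residue(1/z, z, 0)
print(integral)  # 2*I*pi
```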

Functions like ContourIntegrate just “get the answer”. But if one’s learning or exploring calculus it’s often also useful to be able to do things in a more step-by-step way. In Version 14 you can start with an inactive integral

and explicitly do operations like changing variables:

Sometimes actual answers get expressed in inactive form, particularly as infinite sums:

And now in Version 14 the function TruncateSum lets you take such a sum and generate a truncated “approximation”:
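The idea of truncating an infinite sum is easy to sketch numerically (plain Python with an illustrative `truncated_sum` helper, not the Wolfram TruncateSum API): partial sums of 1/n² approach π²/6.

```python
import math

def truncated_sum(term, n_terms):
    """Sum term(1) + term(2) + ... + term(n_terms)."""
    return sum(term(n) for n in range(1, n_terms + 1))

approx = truncated_sum(lambda n: 1 / n**2, 1000)
print(approx, math.pi**2 / 6)   # the truncation is close to the limit
```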

Functions like D and Integrate—as well as LineIntegrate and SurfaceIntegrate—are, in a sense, “classic calculus”, taught and used for more than three centuries. But in Version 14 we also support what we can think of as “emerging” calculus operations, like fractional differentiation:
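As a concrete instance of fractional differentiation (a standard Riemann–Liouville result, stated here for illustration rather than taken from the post), the half-derivative of x is:

```latex
\frac{d^{1/2}}{dx^{1/2}}\,x
  = \frac{\Gamma(2)}{\Gamma(3/2)}\,x^{1/2}
  = 2\sqrt{\frac{x}{\pi}}
```

Applying the half-derivative twice recovers the ordinary derivative, which is what makes the operation a sensible interpolation between integer orders.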

Core Language

What are the primitives from which we can best build our conception of computation? That’s at some level the question I’ve been asking for more than four decades, and what’s determined the functions and structures at the core of the Wolfram Language.

And as the years go by, and we see more and more of what’s possible, we recognize and invent new primitives that will be useful. And, yes, the world—and the ways people interact with computers—change too, opening up new possibilities and bringing new understanding of things. Oh, and this year there are LLMs which can “get the intellectual sense of the world” and suggest new functions that can fit into the framework we’ve created with the Wolfram Language. (And, by the way, there’ve also been lots of great suggestions made by the audiences of our design review livestreams.)

One new construct added in Version 13.1—and that I personally have found very useful—is Threaded. When a function is listable—as Plus is—the top levels of lists get combined:

But sometimes you want one list to be “threaded into” the other at the lowest level, not the highest. And now there’s a way to specify that, using Threaded:
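The distinction can be sketched in plain Python (an analogy only; the Wolfram semantics are more general): pairing a vector with a matrix's rows at the top level, versus threading it into each row at the lowest level.

```python
M = [[1, 2], [3, 4]]
v = [10, 20]

# Default listability: combine at the top level, v[i] with row i.
top = [[x + vi for x in row] for vi, row in zip(v, M)]
print(top)   # [[11, 12], [23, 24]]

# Threaded-style: thread v into each row at the lowest level.
low = [[x + vi for x, vi in zip(row, v)] for row in M]
print(low)   # [[11, 22], [13, 24]]
```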

In a sense, Threaded is part of a new wave of symbolic constructs that have “ambient effects” on lists. One very simple example (introduced in 2015) is Nothing:

Another, introduced in 2020, is Splice:

An old chestnut of Wolfram Language design concerns the way infinite evaluation loops are handled. And in Version 13.2 we introduced the symbolic construct TerminatedEvaluation to provide better definition of how out-of-control evaluations have been terminated:

In a curious connection, in the computational representation of physics in our recent Physics Project, the direct analog of nonterminating evaluations are what make possible the seemingly unending universe in which we live.

But what is actually going on “inside an evaluation”, terminating or not? I’ve always wanted a good representation of this. And in fact back in Version 2.0 we introduced Trace for this purpose:

But just how much detail of what the evaluator does should one show? Back in Version 2.0 we introduced the option TraceOriginal that traces every path followed by the evaluator:

But often this is way too much. And in Version 14.0 we’ve introduced the new setting TraceOriginal → Automatic, which doesn’t include in its output evaluations that don’t do anything:

This may seem pedantic, but when one has an expression of any substantial size, it’s a crucial piece of pruning. So, for example, here’s a graphical representation of a simple arithmetic evaluation, with TraceOriginal → True:

And here’s the corresponding “pruned” version, with TraceOriginal → Automatic:

(And, yes, the structures of these graphs are closely related to things like the causal graphs we construct in our Physics Project.)

In the effort to add computational primitives to the Wolfram Language, two new entrants in Version 14.0 are Comap and ComapApply. The function Map takes a function f and “maps it” over a list:

Comap does the “mathematically co-” version of this, taking a list of functions and “comapping” them onto a single argument:

Why is this useful? As an example, one might want to apply three different statistical functions to a single list. And now it’s easy to do that, using Comap:

By the way, as with Map, there’s also an operator form for Comap:

Comap works well when the functions it’s dealing with take just one argument. If one has functions that take multiple arguments, ComapApply is what one typically wants:
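Both have one-line Python analogies (illustrative names, not the Wolfram API): `comap` applies a list of functions to one argument, and `comap_apply` applies them to a whole argument sequence.

```python
def comap(fs, x):
    """Apply each function in fs to the single argument x."""
    return [f(x) for f in fs]

def comap_apply(fs, args):
    """Apply each function in fs to the argument sequence args."""
    return [f(*args) for f in fs]

data = [1, 2, 3, 4]
print(comap([min, max, sum], data))        # [1, 4, 10]
print(comap_apply([pow, divmod], (7, 3)))  # [343, (2, 1)]
```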

Talking of “co-like” functions, a new function added in Version 13.2 is PositionSmallest. Min gives the smallest element in a list; PositionSmallest instead says where the smallest elements are:
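A plain Python analogy (the `position_smallest` name is illustrative; note that Wolfram positions are 1-based, while Python indices are 0-based):

```python
def position_smallest(xs):
    """All positions (0-based) at which the minimum of xs occurs."""
    m = min(xs)
    return [i for i, v in enumerate(xs) if v == m]

print(position_smallest([3, 1, 4, 1, 5]))  # [1, 3]
```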

One of the important objectives in the Wolfram Language is to have as much as possible “just work”. When we released Version 1.0 strings could be assumed just to contain ordinary ASCII characters, or perhaps to have an external character encoding defined. And, yes, it could be messy not to know “within the string itself” what characters were supposed to be there. And by the time of Version 3.0 in 1996 we’d become contributors to, and early adopters of, Unicode, which provided a standard encoding for “16-bits’-worth” of characters. And for many years this served us well. But in time—and particularly with the growth of emoji—16 bits wasn’t enough to encode all the characters people wanted to use. So a few years ago we began rolling out support for 32-bit Unicode, and in Version 13.1 we integrated it into notebooks—in effect making strings something much richer than before:
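The 16-bit limit is easy to see concretely in Python, whose strings are sequences of full Unicode code points:

```python
# An emoji is a single code point, but its value exceeds 0xFFFF, so it
# cannot fit in one 16-bit unit (UTF-16 needs a surrogate pair for it).
s = "🐘"
print(len(s))            # 1: a single code point
print(ord(s) > 0xFFFF)   # True: beyond the Basic Multilingual Plane
```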

And, yes, you can use Unicode everywhere now:

Video as a Fundamental Object

Back when Version 1.0 was released, a megabyte was a lot of memory. But 35 years later we routinely deal with gigabytes. And one of the things this makes practical is computation with video. We first introduced Video experimentally in Version 12.1 in 2020. And over the past three years we’ve been systematically broadening and strengthening our ability to deal with video in Wolfram Language. Probably the single most important advance is that things around video now—as much as possible—“just work”, without “creaking” under the strain of handling such large amounts of data.

We can directly capture video into notebooks, and we can robustly play video anywhere within a notebook. We’ve also added options for where to store the video so that it’s conveniently accessible to you and anyone else you want to give access to it.

There’s lots of complexity in the encoding of video—and we now robustly and transparently support more than 500 codecs. We also do lots of convenient things automatically, like rotating portrait-mode videos—and being able to apply image processing operations like ImageCrop across whole videos. In every version, we’ve been further optimizing the speed of some video operation or another.

But a particularly big focus has been on video generators: programmatic ways to produce videos and animations. One basic example is AnimationVideo, which produces the same kind of output as Animate, but as a Video object that can either be displayed directly in a notebook, or exported in MP4 or some other format:

AnimationVideo

AnimationVideo is based on computing each frame in a video by evaluating an expression. Another class of video generators take an existing visual construct, and simply “tour” it. TourVideo “tours” images, graphics and geo graphics; Tour3DVideo (new in Version 14.0) tours 3D geometry:

A very powerful capability in Wolfram Language is being able to apply arbitrary functions to videos. One example of how this can be done is VideoFrameMap, which maps a function across frames of a video, and which was made efficient in Version 13.2:

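The actual VideoFrameMap example is elided from this feed, but the shape of the operation is simple. A toy Python sketch, modeling a video as a list of 2D pixel arrays (the function names are invented for illustration):

```python
def video_frame_map(f, frames):
    """Map an image-processing function across the frames of a 'video',
    in the spirit of VideoFrameMap. Here a video is just a list of
    frames, each a list of rows of gray levels."""
    return [f(frame) for frame in frames]

def invert(frame):
    """Invert an 8-bit grayscale frame."""
    return [[255 - px for px in row] for row in frame]

video = [[[0, 128], [255, 64]] for _ in range(3)]   # three tiny 2x2 frames
negative = video_frame_map(invert, video)
assert negative[0] == [[255, 127], [0, 191]]
```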

And although Wolfram Language isn’t intended as an interactive video editing system, we’ve made sure that it’s possible to do streamlined programmatic video editing in the language, and for example in Version 14.0 we’ve added things like transition effects in VideoJoin and timed overlays in OverlayVideo.


So Much Got Faster, Stronger, Sleeker

With every new version of Wolfram Language we add new capabilities to extend yet further the domain of the language. But we also put a lot of effort into something less immediately visible: making existing capabilities faster, stronger and sleeker.

And in Version 14, two areas where we can see examples of all of these are dates and quantities. We introduced the notion of symbolic dates (DateObject, etc.) nearly a decade ago. And over the years since then we’ve built many things on this structure. And in the process of doing this it’s become clear that there are certain flows and paths that are particularly common and convenient. At the beginning what mattered most was just to make sure that the relevant functionality existed. But over time we’ve been able to see what should be streamlined and optimized, and we’ve steadily been doing that.

In addition, as we’ve worked towards new and different applications, we’ve seen “corners” that need to be filled in. So, for example, astronomy is an area we’ve significantly developed in Version 14, and supporting astronomy has required adding several new “high-precision” time capabilities, such as the TimeSystem option, as well as new astronomy-oriented calendar systems. Another example concerns date arithmetic. What should happen if you want to add a month to January 30? Where should you land? Different kinds of business applications and contracts make different assumptions—and so we added a Method option to functions like DatePlus to handle this. Meanwhile, having realized that date arithmetic is involved in the “inner loop” of certain computations, we optimized it—achieving a more than 100x speedup in Version 14.0.
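The January 30 question can be made concrete with a small Python sketch. The two policies below ("clip" and "rollover") are illustrative names for the kind of business-rule choices involved, not the actual values of DatePlus’s Method option:

```python
import calendar
import datetime

def date_plus_months(d, n, method="clip"):
    """Add n months to a date, making the business-rule choice explicit.
    'clip' lands on the last valid day of the target month; 'rollover'
    spills the excess days into the following month."""
    month_index = d.month - 1 + n
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    last_day = calendar.monthrange(year, month)[1]   # days in target month
    if method == "clip":
        return datetime.date(year, month, min(d.day, last_day))
    # rollover: keep counting days past the end of a short month
    overflow = d.day - last_day
    if overflow > 0:
        return datetime.date(year, month, last_day) + datetime.timedelta(days=overflow)
    return datetime.date(year, month, d.day)

jan30 = datetime.date(2023, 1, 30)
assert date_plus_months(jan30, 1, "clip") == datetime.date(2023, 2, 28)
assert date_plus_months(jan30, 1, "rollover") == datetime.date(2023, 3, 2)
```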

Wolfram|Alpha has been able to deal with units ever since it was first launched in 2009—now more than 10,000 of them. And in 2012 we introduced Quantity to represent quantities with units in the Wolfram Language. And over the past decade we’ve been steadily smoothing out a whole series of complicated gotchas and issues with units. For example, what does 100°C + 20°C mean? Well, the 20°C isn’t really the same kind of thing as the 100°C. And now in Wolfram Language we have a systematic way to handle this, by distinguishing temperature and temperature difference units—so that the sum is now written as 100°C plus a 20°C temperature difference.

At first our priority with Quantity was to get it working as broadly as possible, and to integrate it as widely as possible into computations, visualizations, etc. across the system. But as its capabilities have expanded, so have its uses, repeatedly driving the need to optimize its operation for particular common cases. And indeed between Version 13 and Version 14 we’ve dramatically sped up many things related to Quantity, often by factors of 1000 or more.

Talking of speedups, another example—made possible by new algorithms operating on multithreaded CPUs—concerns polynomials. We’ve worked with polynomials in Wolfram Language since Version 1, but in Version 13.2 there was a dramatic speedup of up to 1000x on operations like polynomial factoring.

In addition, a new algorithm in Version 14.0 dramatically speeds up numerical solutions to polynomial and transcendental equations—and, together with the new MaxRoots option, allows us, for example, to pick off a few roots from a degree-one-million polynomial

or to find roots of a transcendental equation that we could not even attempt before without pre-specifying bounds on their values:

Another “old” piece of functionality with recent enhancement concerns mathematical functions. Ever since Version 1.0 we’ve set up mathematical functions so that they can be computed to arbitrary precision:

But in recent versions we’ve wanted to be “more precise about precision”, and to be able to rigorously compute just what range of outputs are possible given the range of values provided as input:

But every function for which we do this effectively requires a new theorem, and we’ve been steadily increasing the number of functions covered—now more than 130—so that this “just works” when you need to use it in a computation.

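The idea of computing a range of outputs from a range of inputs can be sketched in Python. This is a toy version of interval evaluation, invented for illustration (a rigorous implementation would also control rounding direction at every step):

```python
import math

def exp_interval(lo, hi):
    """Image of [lo, hi] under exp: exp is increasing, so the two
    endpoints determine the output range."""
    return (math.exp(lo), math.exp(hi))

def sin_interval(lo, hi):
    """Image of [lo, hi] under sin: endpoints, plus any interior
    critical points (odd multiples of pi/2), where sin attains +/-1."""
    candidates = [math.sin(lo), math.sin(hi)]
    k = math.ceil((lo - math.pi / 2) / math.pi)
    while math.pi / 2 + k * math.pi <= hi:
        candidates.append(math.sin(math.pi / 2 + k * math.pi))
        k += 1
    return (min(candidates), max(candidates))

low, high = sin_interval(0.0, math.pi)   # sin peaks at pi/2 inside the interval
assert abs(high - 1.0) < 1e-12
assert abs(low) < 1e-12
```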

The Tree Story Continues

Trees are useful. We first introduced them as basic objects in the Wolfram Language only in Version 12.3. But now that they’re there, we’re discovering more and more places they can be used. And to support that, we’ve been adding more and more capabilities to them.

One area that’s advanced significantly since Version 13 is the rendering of trees. We tightened up the general graphic design, but, more importantly, we introduced many new options for how rendering should be done.

For example, here’s a random tree where we’ve specified that for all nodes only 3 children should be explicitly displayed; the others are elided:

Here we’re adding several options to define the rendering of the tree:

By default, the branches in trees are labeled with integers, just like parts in an expression. But in Version 13.1 we added support for named branches defined by associations:

Our original conception of trees was very centered around having elements one would explicitly address, and that could have “payloads” attached. But what became clear is that there were applications where all that mattered was the structure of the tree, not anything about its elements. So we added UnlabeledTree to create “pure trees”:

Trees are useful because many kinds of structures are basically trees. And since Version 13 we’ve added capabilities for converting trees to and from various kinds of structures. For example, here’s a simple Dataset object:

You can use ExpressionTree to convert this to a tree:

And TreeExpression to convert it back:

We’ve also added capabilities for converting to and from JSON and XML, as well as for representing file directory structures as trees:

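The round trip between nested data and trees can be sketched in Python. This is loosely in the spirit of ExpressionTree/TreeExpression, with invented helper names; a tree is a `(label, children)` pair (and the sketch assumes no leaf value is itself the string "dict" or "list"):

```python
def to_tree(value):
    """Turn nested dicts/lists into a (label, children) tree."""
    if isinstance(value, dict):
        return ("dict", [(k, [to_tree(v)]) for k, v in value.items()])
    if isinstance(value, list):
        return ("list", [to_tree(v) for v in value])
    return (value, [])          # leaf: payload, no children

def from_tree(node):
    """Inverse conversion, rebuilding the nested structure."""
    label, children = node
    if label == "dict":
        return {k: from_tree(sub[0]) for k, sub in children}
    if label == "list":
        return [from_tree(c) for c in children]
    return label

data = {"a": [1, 2], "b": {"c": 3}}
assert from_tree(to_tree(data)) == data   # lossless round trip
```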

Finite Fields

In Version 1.0 we had integers, rational numbers and real numbers. In Version 3.0 we added algebraic numbers (represented implicitly by Root)—and a dozen years later we added algebraic number fields and transcendental roots. For Version 14 we’ve now added another (long-awaited) “number-related” construct: finite fields.

Here’s our symbolic representation of the field of integers modulo 7:

And now here’s a specific element of that field

which we can immediately compute with:

But what’s really important about what we’ve done with finite fields is that we’ve fully integrated them into other functions in the system. So, for example, we can factor a polynomial whose coefficients are in a finite field:

We can also do things like find solutions to equations over finite fields. So here, for example, is a point on a Fermat curve over the finite field GF(173):

And here is a power of a matrix with elements over the same finite field:
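The arithmetic itself is easy to sketch. Here is a minimal Python class for a prime field such as GF(7); this covers only GF(p) (GF(p^k) needs polynomial representations) and is an illustration, not the FiniteField implementation:

```python
class GF:
    """An element of the prime field GF(p), with modular arithmetic."""
    def __init__(self, value, p):
        self.p, self.value = p, value % p

    def __add__(self, other):
        return GF(self.value + other.value, self.p)

    def __mul__(self, other):
        return GF(self.value * other.value, self.p)

    def inverse(self):
        # Fermat's little theorem: x^(p-2) is the inverse mod a prime p
        return GF(pow(self.value, self.p - 2, self.p), self.p)

    def __eq__(self, other):
        return (self.p, self.value) == (other.p, other.value)

x = GF(3, 7)
assert x + GF(6, 7) == GF(2, 7)       # 3 + 6 = 9 = 2 mod 7
assert x * x.inverse() == GF(1, 7)    # 3 * 5 = 15 = 1 mod 7
```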

Going Off Planet: The Astro Story

A major new capability added since Version 13 is astro computation. It begins with being able to compute to high precision the positions of things like planets. Even knowing what one means by “position” is complicated, though—with lots of different coordinate systems to deal with. By default AstroPosition gives the position in the sky at the current time from your Here location:

But one can instead ask about a different coordinate system, like global galactic coordinates:

And now here’s a plot of the distance between Saturn and Jupiter over a 50-year period:

In direct analogy to GeoGraphics, we’ve added AstroGraphics, here showing a patch of sky around the current position of Saturn:

And this now shows the sequence of positions for Saturn over the course of a couple of years—yes, including retrograde motion:

There are many styling options for AstroGraphics. Here we’re adding a background of the “galactic sky”:

And here we’re including renderings for constellations (and, yes, we had an artist draw them):

Something specifically new in Version 14.0 has to do with extended handling of solar eclipses. We always try to deliver new functionality as fast as we can. But in this case there was a very specific deadline: the total solar eclipse visible from the US on April 8, 2024. We’ve had the ability to do global computations about solar eclipses for some time (actually since soon before the 2017 eclipse). But now we can also do detailed local computations right in the Wolfram Language.

So, for example, here’s a somewhat detailed overall map of the April 8, 2024, eclipse:

Now here’s a plot of the magnitude of the eclipse over a few hours, complete with a little “rampart” associated with the period of totality:

And here’s a map of the region of totality every minute just after the moment of maximum eclipse:

Millions of Species Become Computable

We first introduced computable data on biological organisms back when Wolfram|Alpha was released in 2009. But in Version 14—following several years of work—we’ve dramatically broadened and deepened the computable data we have about biological organisms.

So for example here’s how we can figure out what species have cheetahs as predators:

And here are pictures of these:

Here’s a map of countries where cheetahs have been seen (in the wild):

We now have data—curated from a great many sources—on more than a million species of animals, as well as most of the plants, fungi, bacteria, viruses and archaea that have been described. And for animals, for example, we have nearly 200 properties that are extensively filled in. Some are taxonomic properties:

Some are physical properties:

Some are genetic properties:

Some are ecological properties (yes, the cheetah is not the apex predator):

It’s useful to be able to get properties of individual species, but the real power of our curated computable data shows up when one does larger-scale analyses. For example, here’s a plot of the longest genome lengths across our collection of organisms:

Or here’s a histogram of the genome lengths for organisms in the human gut microbiome:

And here’s a scatterplot of the lifespans of birds against their weights:

Following the idea that cheetahs aren’t apex predators, this is a graph of what’s “above” them in the food chain:

Chemical Computation

We began the process of introducing chemical computation into the Wolfram Language in Version 12.0, and by Version 13 we had good coverage of atoms, molecules, bonds and functional groups. Now in Version 14 we’ve added coverage of chemical formulas, amounts of chemicals—and chemical reactions.

Here’s a chemical formula, which basically just gives a “count of atoms”:
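The "count of atoms" view of a formula is easy to sketch in Python. This toy parser (an invented helper, not the Wolfram Language one) handles simple formulas only, with no parentheses, hydrates or charges:

```python
import re

def atom_counts(formula):
    """Parse a simple chemical formula like 'C6H12' into atom counts."""
    counts = {}
    # An element symbol is one uppercase letter, optionally one lowercase,
    # followed by an optional count (missing count means 1).
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] = counts.get(element, 0) + int(number or "1")
    return counts

assert atom_counts("C6H12") == {"C": 6, "H": 12}   # e.g. methylcyclopentane
assert atom_counts("CH4") == {"C": 1, "H": 4}
```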

Now here are specific molecules with that formula:

Let’s pick one of these molecules:

Now in Version 14 we have a way to represent a certain quantity of molecules of a given type—here 1 gram of methylcyclopentane:

ChemicalConvert can convert to a different specification of quantity, here moles:

And here, a count of molecules:
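The underlying unit conversions are just arithmetic, sketched here in Python. The molar mass figure is my own assumption (methylcyclopentane, C6H12, is roughly 84.16 g/mol), not a value from the source text:

```python
# Converting an amount of a chemical between mass, moles and molecule count,
# in the spirit of ChemicalConvert.
AVOGADRO = 6.02214076e23      # molecules per mole (exact, 2019 SI definition)
MOLAR_MASS = 84.16            # g/mol for C6H12 (approximate; an assumption)

grams = 1.0
moles = grams / MOLAR_MASS
molecules = moles * AVOGADRO

assert abs(moles - 0.01188) < 1e-4       # about 0.0119 mol
assert 7.0e21 < molecules < 7.3e21       # about 7.2 * 10^21 molecules
```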

But now the bigger story is that in Version 14 we can represent not just individual types of molecules, and quantities of molecules, but also chemical reactions. Here we give a “sloppy” unbalanced representation of a reaction, and ReactionBalance gives us the balanced version:

And now we can extract the formulas for the reactants:

We can also give a chemical reaction in terms of molecules:

But with our symbolic representation of molecules and reactions, there’s now a big thing we can do: represent classes of reactions as “pattern reactions”, and work with them using the same kinds of concepts as we use in working with patterns for general expressions. So, for example, here’s a symbolic representation of the hydrohalogenation reaction:

Now we can apply this pattern reaction to particular molecules:

Here’s a more elaborate example, in this case entered using a SMARTS string:

Here we’re applying the reaction just once:

And now we’re doing it repeatedly

in this case generating longer and longer molecules (which happen to be polypeptides):

The Knowledgebase Is Always Growing

Every minute of every day, new data is being added to the Wolfram Knowledgebase. Much of it is coming automatically from real-time feeds. But we also have a very large-scale ongoing curation effort with humans in the loop. We’ve built sophisticated (Wolfram Language) automation for our data curation pipeline over the years—and this year we’ve been able to increase efficiency in some areas by using LLM technology. But it’s hard to do curation right, and our long-term experience is that to do so ultimately requires human experts being in the loop, which we have.

So what’s new since Version 13.0? 291,842 new notable current and historical people; 264,467 music works; 118,538 music albums; 104,024 named stars; and so on. Sometimes the addition of an entity is driven by the new availability of reliable data; often it’s driven by the need to use that entity in some other piece of functionality (e.g. stars to render in AstroGraphics). But more than just adding entities there’s the issue of filling in values of properties of existing entities. And here again we’re always making progress, sometimes integrating newly available large-scale secondary data sources, and sometimes doing direct curation ourselves from primary sources.

A recent example where we needed to do direct curation was in data on alcoholic beverages. We have very extensive data on hundreds of thousands of types of foods and drinks. But none of our large-scale sources included data on alcoholic beverages. So that’s an area where we need to go to primary sources (in this case typically the original producers of products) and curate everything for ourselves.

So, for example, we can now ask for something like the distribution of flavors of different varieties of vodka (actually, personally, not being a consumer of such things, I had no idea vodka even had flavors…):

But beyond filling out entities and properties of existing types, we’ve also steadily been adding new entity types. One recent example is geological formations, 13,706 of them:

So now, for example, we can specify where T. rex have been found

and we can show those regions on a map:

Industrial-Strength Multidomain PDEs

PDEs are hard. It’s hard to solve them. And it’s hard to even specify what exactly you want to solve. But we’ve been on a multi-decade mission to “consumerize” PDEs and make them easier to work with. Many things go into this. You need to be able to easily specify elaborate geometries. You need to be able to easily define mathematically complicated boundary conditions. You need to have a streamlined way to set up the complicated equations that come out of underlying physics. Then you have to—as automatically as possible—do the sophisticated numerical analysis to efficiently solve the equations. But that’s not all. You also often need to visualize your solution, compute other things from it, or run optimizations of parameters over it.
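As a flavor of what "numerically solving a PDE" means at the lowest level, here is a toy explicit finite-difference scheme for the 1D heat equation. This is purely illustrative and has nothing to do with NDSolve's actual (far more sophisticated) finite element machinery:

```python
def diffuse(u, alpha, steps):
    """Explicit finite-difference stepping for u_t = alpha * u_xx
    (grid spacing and time step both 1), with the two ends held fixed.
    Stable for alpha <= 0.5."""
    for _ in range(steps):
        u = [u[0]] + [
            u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
            for i in range(1, len(u) - 1)
        ] + [u[-1]]
    return u

# A hot spot in the middle of a cold rod spreads out and decays:
u0 = [0.0] * 5 + [1.0] + [0.0] * 5
out = diffuse(u0, alpha=0.25, steps=50)
assert max(out) < 1.0                    # the peak has decayed
assert all(v >= 0.0 for v in out)        # the scheme preserves positivity here
```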

It’s a deep use of what we’ve built with Wolfram Language—touching many parts of the system. And the result is something unique: a truly streamlined and integrated way to handle PDEs. One’s not dealing with some (usually very expensive) “just for PDEs” package; what we now have is a “consumerized” way to handle PDEs whenever they’re needed—for engineering, science, or whatever. And, yes, being able to connect machine learning, or image computation, or curated data, or data science, or real-time sensor feeds, or parallel computing, or, for that matter, Wolfram Notebooks, to PDEs just makes them so much more valuable.

We’ve had “basic, raw NDSolve” since 1991. But what’s taken decades to build is all the structure around that to let one conveniently set up—and efficiently solve—real-world PDEs, and connect them into everything else. It’s taken developing a whole tower of underlying algorithmic capabilities such as our more-flexible-and-integrated-than-ever-before industrial-strength computational geometry and finite element methods. But beyond that it’s taken creating a language for specifying real-world PDEs. And here the symbolic nature of the Wolfram Language—and our whole design framework—has made possible something unique that has allowed us to dramatically simplify and consumerize the use of PDEs.

It’s all about providing symbolic “construction kits” for PDEs and their boundary conditions. We started this about five years ago, progressively covering more and more application areas. In Version 14 we’ve particularly focused on solid mechanics, fluid mechanics, electromagnetics and (one-particle) quantum mechanics.

Here’s an example from solid mechanics. First, we define the variables we’re dealing with (displacement and underlying coordinates):

Next, we specify the parameters we want to use to describe the solid material we’re going to work with:

Now we can actually set up our PDE—using symbolic PDE specifications like SolidMechanicsPDEComponent—here for the deformation of a solid object pulled on one side:

And, yes, “underneath”, these simple symbolic specifications turn into a complicated “raw” PDE:

Now we are ready to actually solve our PDE in a particular region, i.e. for an object with a particular shape:

And now we can visualize the result, which shows how our object stretches when it’s pulled on:

The way we’ve set things up, the material for our object is an idealization of something like rubber. But in the Wolfram Language we now have ways to specify all sorts of detailed properties of materials. So, for example, we can add reinforcement as a unit vector in a particular direction (say in practice with fibers) to our material:

Then we can rerun what we did before

but now we get a slightly different result:

Another major PDE domain that’s new in Version 14.0 is fluid flow. Let’s do a 2D example. Our variables are 2D velocity and pressure:

Now we can set up our fluid system in a particular region, with no-slip conditions on all walls except at the top where we assume fluid is flowing from left to right. The only parameter needed is the Reynolds number. And instead of just solving our PDEs for a single Reynolds number, let’s create a parametric solver that can take any specified Reynolds number:

Now here’s the result for Reynolds number 100:

But with the way we’ve set things up, we can just as well generate a whole video as a function of Reynolds number (and, yes, the Parallelize speeds things up by generating different frames in parallel):

Much of our work in PDEs involves catering to the complexities of real-world engineering situations. But in Version 14.0 we’re also adding features to support “pure physics”, and in particular to support quantum mechanics done with the Schrödinger equation. So here, for example, is the 2D 1-particle Schrödinger equation:

Here’s the region we’re going to be solving over—showing explicit discretization:

Now we can solve the equation, adding in some boundary conditions:

And now we get to visualize a Gaussian wave packet scattering around a barrier:

Streamlining Systems Engineering Computation

Systems engineering is a big field, but it’s one where the structure and capabilities of the Wolfram Language provide unique advantages—that over the past decade have allowed us to build out rather complete industrial-strength support for modeling, analysis and control design for a wide range of types of systems. It’s all an integrated part of the Wolfram Language, accessible through the computational and interface structure of the language. But it’s also integrated with our separate Wolfram System Modeler product, that provides a GUI-based workflow for system modeling and exploration.

Shared with System Modeler are large collections of domain-specific modeling libraries. And, for example, since Version 13, we’ve added libraries in areas such as battery engineering, hydraulic engineering and aircraft engineering—as well as educational libraries for mechanical engineering, thermal engineering, digital electronics, and biology. (We’ve also added libraries for areas such as business and public policy simulation.)

Domain-specific modeling libraries

A typical workflow for systems engineering begins with the setting up of a model. The model can be built from scratch, or assembled from components in model libraries—either visually in Wolfram System Modeler, or programmatically in the Wolfram Language. For example, here’s a model of an electric motor that’s turning a load through a flexible shaft:

Once one’s got a model, one can then simulate it. Here’s an example where we’ve set one parameter of our model (the moment of inertia of the load), and we’re computing the values of two others as a function of time:

A new capability in Version 14.0 is being able to see the effect of uncertainty in parameters (or initial values, etc.) on the behavior of a system. So here, as an example, we’re saying the value of the parameter is not definite, but is instead distributed according to a normal distribution—then we’re seeing the distribution of output results:

The motor with flexible shaft that we’re looking at can be thought of as a “multidomain system”, combining electrical and mechanical components. But the Wolfram Language (and Wolfram System Modeler) can also handle “mixed systems”, combining analog and digital (i.e. continuous and discrete) components. Here’s a fairly sophisticated example from the world of control systems: a helicopter model connected in a closed loop to a digital control system:

Helicopter model

This whole model system can be represented symbolically just by:

And now we compute the input-output response of the model:

Here’s specifically the output response:

But now we can “drill in” and see specific subsystem responses, here of the zero-order hold device (labeled ZOH above)—complete with its little digital steps:

But what if we want to design the control systems ourselves? Well, in Version 14 we can now apply all our Wolfram Language control systems design functionality to arbitrary system models. Here’s an example of a simple model, in this case in chemical engineering (a continuously stirred tank):

Now we can take this model and design an LQG controller for it—then assemble a whole closed-loop system for it:

Now we can simulate the closed-loop system—and see that the controller succeeds in bringing the final value to 0:
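The closed-loop idea can be illustrated with a tiny Python simulation. This scalar discrete-time system with state feedback is a toy stand-in for the LQG design described above, with invented parameter values:

```python
def simulate(a, b, k, x0, steps):
    """Discrete-time closed loop: plant x' = a*x + b*u, with state
    feedback controller u = -k*x. The closed-loop dynamics are
    x' = (a - b*k)*x, stable when |a - b*k| < 1."""
    x = x0
    history = [x]
    for _ in range(steps):
        u = -k * x                 # controller
        x = a * x + b * u          # plant
        history.append(x)
    return history

# An unstable plant (a = 1.2) stabilized by feedback (a - b*k = 0.5):
traj = simulate(a=1.2, b=1.0, k=0.7, x0=1.0, steps=40)
assert abs(traj[-1]) < 1e-3        # the controller drives the state to 0
```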

Graphics: More Beautiful & Alive

Graphics have always been an important part of the story of the Wolfram Language, and for more than three decades we’ve been progressively enhancing and updating their appearance and functionality—sometimes with help from advances in hardware (e.g. GPU) capabilities.

Since Version 13 we’ve added a variety of “decorative” (or “annotative”) effects in 2D graphics. One example (useful for putting captions on things) is Haloing:

Another example is DropShadowing:

All of these are specified symbolically, and can be used throughout the system (e.g. in hover effects, etc.). And, yes, there are many detailed parameters you can set:

A significant new capability in Version 14.0 is convenient texture mapping. We’ve had low-level polygon-by-polygon textures for a decade and a half. But now in Version 14.0 we’ve made it straightforward to map textures onto whole surfaces. Here’s an example wrapping a texture onto a sphere:

And here’s wrapping the same texture onto a more complicated surface:

A significant subtlety is that there are many ways to map what amount to “texture coordinate patches” onto surfaces. The documentation illustrates the new, named cases:

Texture coordinate patches

And now here’s what happens with stereographic projection onto a sphere:
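The stereographic case has a nice closed form. A small Python sketch of the inverse stereographic projection (mapping a texture-plane point onto the unit sphere, projecting from the north pole), written out for illustration:

```python
import math

def inverse_stereographic(u, v):
    """Map a plane point (u, v) onto the unit sphere via inverse
    stereographic projection from the north pole (0, 0, 1)."""
    d = u * u + v * v + 1.0
    return (2 * u / d, 2 * v / d, (u * u + v * v - 1.0) / d)

x, y, z = inverse_stereographic(1.0, 2.0)
assert abs(x * x + y * y + z * z - 1.0) < 1e-12        # always lands on the sphere
assert inverse_stereographic(0.0, 0.0) == (0.0, 0.0, -1.0)  # origin -> south pole
```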

Here’s an example of “surface texture” for the planet Venus

and here it’s been mapped onto a sphere, which can be rotated:

Here’s a “flowerified” bunny:

Things like texture mapping help make graphics visually compelling. Since Version 13 we’ve also added a variety of “live visualization” capabilities that automatically “bring visualizations to life”. For example, any plot now by default has a “coordinate mouseover”:

As usual, there’s lots of ways to control such “highlighting” effects:

Euclid Redux: The Advance of Synthetic Geometry

One might say it’s been two thousand years in the making. But four years ago (Version 12) we began to introduce a computable version of Euclid-style synthetic geometry.

The idea is to specify geometric scenes symbolically by giving a collection of (potentially implicit) constraints:

We can then generate a random instance of geometry consistent with the constraints—and in Version 14 we’ve considerably enhanced our ability to make sure that geometry will be “typical” and non-degenerate:

But now a new feature of Version 14 is that we can find values of geometric quantities that are determined by the constraints:

Here’s a slightly more complicated case:

And here we’re now solving for the areas of two triangles in the figure:

We’ve always been able to give explicit styles for particular elements of a scene:

Now one of the new features in Version 14 is being able to give general “geometric styling rules”, here just assigning random colors to each element:

The Ever-Smoother User Interface

Our goal with Wolfram Language is to make it as easy as possible to express oneself computationally. And a big part of achieving that is the coherent design of the language itself. But there’s another part as well, which is being able to actually enter Wolfram Language input one wants—say in a notebook—as easily as possible. And with every new version we make enhancements to this.

One area that’s been in continuous development is interactive syntax highlighting. We first added syntax highlighting nearly two decades ago—and over time we’ve progressively made it more and more sophisticated, responding both as you type, and as code gets executed. Some highlighting has always had obvious meaning. But particularly highlighting that is dynamic and based on cursor position has sometimes been harder to interpret. And in Version 14—leveraging the brighter color palettes that have become the norm in recent years—we’ve tuned our dynamic highlighting so it’s easier to quickly tell “where you are” within the structure of an expression:

Dynamic highlighting

On the subject of “knowing what one has”, another enhancement—added in Version 13.2—is differentiated frame coloring for different kinds of visual objects in notebooks. Is that thing one has a graphic? Or an image? Or a graph? Now one can tell from the color of the frame when one selects it:

Differentiated frame coloring

An important aspect of the Wolfram Language is that the names of built-in functions are spelled out enough that it’s easy to tell what they do. But often the names are therefore necessarily quite long, and so it’s important to be able to autocomplete them when one’s typing. In 13.3 we added the notion of “fuzzy autocompletion” that not only “completes to the end” a name one’s typing, but also can fill in intermediate letters, change capitalization, etc. Thus, for example, just typing lll brings up an autocompletion menu that begins with ListLogLogPlot:

Autocompletion menu

A major user interface update that first appeared in Version 13.1—and has been enhanced in subsequent versions—is a default toolbar for every notebook:

Default toolbar

The toolbar provides immediate access to evaluation controls, cell formatting and various kinds of input (like inline cells, hyperlinks, drawing canvases, etc.)—as well as to things like cloud publishing, documentation search and “chat” (i.e. LLM) settings.

Much of the time, it’s useful to have the toolbar displayed in any notebook you’re working with. But on the left-hand side there’s a tiny control that lets you minimize the toolbar:

Minimize toolbar

In 14.0 there’s a Preferences setting that makes the toolbar come up minimized in any new notebook you create—and this in effect gives you the best of both worlds: you have immediate access to the toolbar, but your notebooks don’t have anything “extra” that might distract from their content.


Another thing that’s advanced since Version 13 is the handling of “summary” forms of output in notebooks. A basic example is what happens if you generate a very large result. By default only a summary of the result is actually displayed. But now there’s a bar at the bottom that gives various options for how to handle the actual output:


By default, the output is only stored in your current kernel session. But by pressing the Iconize button you get an iconized form that will appear directly in your notebook (or one that can be copied anywhere) and that “has the whole output inside”. There’s also a Store full expression in notebook button, which will “invisibly” store the output expression “behind” the summary display.


If the expression is stored in the notebook, then it’ll be persistent across kernel sessions. Otherwise, well, you won’t be able to get to it in a different kernel session; the only thing you’ll have is the summary display:


Summary display


It’s a similar story for large “computational objects”. Like here’s a Nearest function with a million data points:
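A minimal sketch of building such an object (the random data here is purely illustrative):

```wolfram
(* build a NearestFunction over a million random 2D points *)
nf = Nearest[RandomReal[1, {10^6, 2}]];

(* query it for the dataset point closest to {0.5, 0.5} *)
nf[{0.5, 0.5}]
```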


By default, the data is just something that exists in your current kernel session. But now there’s a menu that lets you save the data in various persistent locations:


Save data menu


And There’s the Cloud Too


There are many ways to run the Wolfram Language. Even in Version 1.0 we had the notion of remote kernels: the notebook front end running on one machine (in those days essentially always a Mac, or a NeXT), and the kernel running on a different machine (in those days sometimes even connected by phone lines). But a decade ago came a major step forward: the Wolfram Cloud.


There are really two distinct ways in which the cloud is used. The first is in delivering a notebook experience similar to our longtime desktop experience, but running purely in a browser. And the second is in delivering APIs and other programmatically accessed capabilities—notably, even at the beginning, a decade ago, through things like APIFunction.


The Wolfram Cloud has been the target of intense development now for nearly 15 years. Alongside it have also come Wolfram Application Server and Wolfram Web Engine, which provide more streamlined support specifically for APIs (without things like user management, etc., but with things like clustering).


All of these—but particularly the Wolfram Cloud—have become core technology capabilities for us, supporting many of our other activities. So, for example, the Wolfram Function Repository and Wolfram Paclet Repository are both based on the Wolfram Cloud (and in fact this is true of our whole resource system). And when we came to build the Wolfram plugin for ChatGPT earlier this year, using the Wolfram Cloud allowed us to have the plugin deployed within a matter of days.


Since Version 13 there have been quite a few very different applications of the Wolfram Cloud. One is for the function ARPublish, which takes 3D geometry and puts it in the Wolfram Cloud with appropriate metadata to allow phones to get augmented-reality versions from a QR code of a cloud URL:
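As a hedged sketch of the kind of call involved (the particular geometry here is just an example pulled from ExampleData):

```wolfram
(* publish a 3D model for augmented-reality viewing; the result
   includes a QR code that phones can scan to view the model *)
ARPublish[ExampleData[{"Geometry3D", "Triceratops"}]]
```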


Augmented-reality triptych


On the Cloud Notebook side, there’s been a steady increase in usage, notably of embedded Cloud Notebooks, which have for example become common on Wolfram Community, and are used all over the Wolfram Demonstrations Project. Our goal all along has been to make Cloud Notebooks be as easy to use as simple webpages, but to have the depth of capabilities that we’ve developed in notebooks over the past 35 years. We achieved this some years ago for fairly small notebooks, but in the past couple of years we’ve been going progressively further in handling even multi-hundred-megabyte notebooks. It’s a complicated story of caching, refreshing—and dodging the vicissitudes of web browsers. But at this point the vast majority of notebooks can be seamlessly deployed to the cloud, and will display as immediately as simple webpages.


The Great Integration Story for External Code


It’s been possible to call external code from Wolfram Language ever since Version 1.0. But in Version 14 there are important advances in the extent and ease with which external code can be integrated. The overall goal is to be able to use all the power and coherence of the Wolfram Language even when some part of a computation is done in external code. And in Version 14 we’ve done a lot to streamline and automate the process by which external code can be integrated into the language.


Once something is integrated into the Wolfram Language it just becomes, for example, a function that can be used just like any other Wolfram Language function. But what’s underneath is necessarily quite different for different kinds of external code. There’s one setup for interpreted languages like Python. There’s another for C-like compiled languages and dynamic libraries. (And then there are others for external processes, APIs, and what amount to “importable code specifications”, say for neural networks.)


Let’s start with Python. We’ve had ExternalEvaluate for evaluating Python code since 2018. But when you actually come to use Python there are all these dependencies and libraries to deal with. And, yes, that’s one of the places where the incredible advantages of the Wolfram Language and its coherent design are painfully evident. But in Version 14.0 we now have a way to encapsulate all that Python complexity, so that we can deliver Python functionality within Wolfram Language, hiding all the messiness of Python dependencies, and even the versioning of Python itself.


As an example, let’s say we want to make a Wolfram Language function Emojize that uses the Python function emojize within the emoji Python library. Here’s how we can do that:
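The exact Version 14 encapsulation isn’t reproduced here, but a simplified sketch using the longstanding ExternalEvaluate mechanism conveys the idea (it assumes the emoji library is already available to the Python session; Version 14’s mechanism is what automates that dependency handling):

```wolfram
(* start a Python session and import the emoji library *)
session = StartExternalSession["Python"];
ExternalEvaluate[session, "import emoji"];

(* define a Wolfram Language function that calls into Python *)
Emojize[s_String] := ExternalEvaluate[session,
  <|"Command" -> "emoji.emojize", "Arguments" -> {s}|>]

Emojize["Wolfram Language :thumbs_up:"]
```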


And now you can just call Emojize in the Wolfram Language and—under the hood—it’ll run Python code:


The way this works is that the first time you call Emojize, a Python environment with all the right features is created, then is cached for subsequent uses. And what’s important is that the Wolfram Language specification of Emojize is completely system independent (or as system independent as it can be, given vicissitudes of Python implementations). So that means that you can, for example, deploy Emojize in the Wolfram Function Repository just like you would deploy something written purely in Wolfram Language.


There’s very different engineering involved in calling C-compatible functions in dynamic libraries. But in Version 13.3 we also made this very streamlined using the function ForeignFunctionLoad. There’s all sorts of complexity associated with converting to and from native C data types, managing memory for data structures, etc. But we’ve now got very clean ways to do this in Wolfram Language.


As an example, here’s how one sets up a “foreign function” call to a function RAND_bytes in the OpenSSL library:
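A sketch of what such a setup can look like (the library name and C type signature are assumptions for a typical OpenSSL installation, where the C declaration is int RAND_bytes(unsigned char *buf, int num)):

```wolfram
(* locate OpenSSL's libcrypto and load RAND_bytes as a foreign function *)
lib = FindLibrary["libcrypto"];
randBytes = ForeignFunctionLoad[lib, "RAND_bytes",
  {"RawPointer"::["UnsignedInteger8"], "CInt"} -> "CInt"]
```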


Inside this, we’re using Wolfram Language compiler technology to specify the native C types that will be used in the foreign function. But now we can package this all up into a Wolfram Language function:
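A hedged sketch of such a wrapper, assuming a foreign function randBytes has been loaded from RAND_bytes as described above:

```wolfram
(* allocate a raw buffer, have OpenSSL fill it with random bytes,
   then read the bytes back into a Wolfram Language list *)
RandomBytes[n_Integer?Positive] :=
 Module[{buf = RawMemoryAllocate["UnsignedInteger8", n]},
  randBytes[buf, n];
  Table[RawMemoryRead[buf, i], {i, 0, n - 1}]]
```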


And we can call this function just like any other Wolfram Language function:


Internally, all sorts of complicated things are going on. For example, we’re allocating a raw memory buffer that’s then getting fed to our C function. But when we do that memory allocation we’re creating a symbolic structure that defines it as a “managed object”:
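A sketch of the kind of construct involved (the exact freeing specification may differ):

```wolfram
(* wrap raw memory as a managed object; when no references to it
   remain, the associated memory is freed automatically *)
buffer = CreateManagedObject[RawMemoryAllocate["UnsignedInteger8", 10]]
```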


And now when this object is no longer being used, the memory associated with it will be automatically freed.


And, yes, with both Python and C there’s quite a bit of complexity underneath. But the good news is that in Version 14 we’ve basically been able to automate handling it. And the result is that what gets exposed is pure, simple Wolfram Language.


But there’s another big piece to this. Within particular Python or C libraries there are often elaborate definitions of data structures that are specific to that library. And so to use these libraries one has to dive into all the—potentially idiosyncratic—complexities of those definitions. But in the Wolfram Language we have consistent symbolic representations for things, whether they’re images, or dates or types of chemicals. When you first hook up an external library you have to map its data structures to these. But once that’s done, anyone can use what’s been built, and seamlessly integrate with other things they’re doing, perhaps even calling other external code. In effect what’s happening is that one’s leveraging the whole design framework of the Wolfram Language, and applying that even when one’s using underlying implementations that aren’t based on the Wolfram Language.


For Serious Developers


A single line (or less) of Wolfram Language code can do a lot. But one of the remarkable things about the language is that it’s fundamentally scalable: good both for very short programs and very long programs. And since Version 13 there’ve been several advances in handling very long programs. One of them concerns “code editing”.


Standard Wolfram Notebooks work very well for exploratory, expository and many other forms of work. And it’s certainly possible to write large amounts of code in standard notebooks (and, for example, I personally do it). But when one’s doing “software-engineering-style work” it’s both more convenient and more familiar to use what amounts to a pure code editor, largely separate from code execution and exposition. And this is why we have the “package editor”, accessible from File > New > Package/Script. You’re still operating in the notebook environment, with all its sophisticated capabilities. But things have been “skinned” to provide a much more textual “code experience”—both in terms of editing, and in terms of what actually gets saved in .wl files.


Here’s a typical example of the package editor in action (in this case applied to our GitLink package):


Package editor


Several things are immediately evident. First, it’s very line oriented. Lines (of code) are numbered, and don’t break except at explicit newlines. There are headings just like in ordinary notebooks, but when the file is saved, they’re stored as comments with a certain stylized structure:


Lines of code


It’s still perfectly possible to run code in the package editor, but the output won’t get saved in the .wl file:


Unsaved output


One thing that’s changed since Version 13 is that the toolbar is much enhanced. And for example there’s now “smart search” that is aware of code structure:


Smart search


You can also ask to go to a line number—and you’ll immediately see whatever lines of code are nearby:


Nearby lines of code


In addition to code editing, another set of features new since Version 13 that is of importance to serious developers concerns automated testing. The main advance is the introduction of a fully symbolic testing framework, in which individual tests are represented as symbolic objects


and can be manipulated in symbolic form, then run using functions like TestEvaluate and TestReport:
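For instance, a minimal sketch (the test inputs here are invented for illustration):

```wolfram
(* tests are inert symbolic objects until explicitly run *)
tests = {VerificationTest[1 + 1, 2],
   VerificationTest[Sort[{3, 1, 2}], {1, 2, 3}]};

(* run one test, or aggregate all of them into a report *)
TestEvaluate[First[tests]]
TestReport[tests]
```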


In Version 14.0 there’s another new testing function—IntermediateTest—that lets you insert what amount to checkpoints inside larger tests:
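A sketch of how such a checkpoint might look (the computation is invented for illustration):

```wolfram
(* IntermediateTest checkpoints run as part of the enclosing test *)
VerificationTest[
 Module[{data = Range[100]},
  IntermediateTest[Length[data], 100];
  IntermediateTest[First[data], 1];
  Total[data]],
 5050]
```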


Evaluating this test, we see that the intermediate tests were also run:


Wolfram Function Repository: 2900 Functions & Counting


The Wolfram Function Repository has been a big success. We introduced it in 2019 as a way to make specific, individual contributed functions available in the Wolfram Language. And now there are more than 2900 such functions in the Repository.


The nearly 7000 functions that constitute the Wolfram Language as it is today have been painstakingly developed over the past three and a half decades, always mindful of creating a coherent whole with consistent design principles. And now in a sense the success of the Function Repository is one of the dividends of all that effort. Because it’s the coherence and consistency of the underlying language and its design principles that make it feasible to just add one function at a time, and have it really work. You want to add a function to do some very specific operation that combines images and graphs. Well, there’s a consistent representation of both images and graphs in the Wolfram Language, which you can leverage. And by following the principles of the Wolfram Language—like for the naming of functions—you can create a function that’ll be easy for Wolfram Language users to understand and use.


Using the Wolfram Function Repository is a remarkably seamless process. If you know the function’s name, you can just call it using ResourceFunction; the function will be loaded if it’s needed, and then it’ll just run:
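For example, with the RandomHypergraph function mentioned later in this piece (its argument here is illustrative):

```wolfram
(* loads the function from the repository if needed, then runs it *)
ResourceFunction["RandomHypergraph"][8]

(* pin a particular version for reproducibility; "1.0.0" is hypothetical *)
ResourceFunction["RandomHypergraph", ResourceVersion -> "1.0.0"][8]
```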


If there’s an update available for the function, it’ll give you a message, but run the old version anyway. The message has a button that lets you load in the update; then you can rerun your input and use the new version. (If you’re writing code where you want to “burn in” a particular version of a function, you can just use the ResourceVersion option of ResourceFunction.)


If you want your code to look more elegant, just evaluate the ResourceFunction object


and use the formatted version:


And, by the way, pressing the + then gives you more information about the function:


Function information


An important feature of functions in the Function Repository is that they all have documentation pages—that are organized pretty much like the pages for built-in functions:


SolarEclipseIcon function page


But how does one create a Function Repository entry? Just go to File > New > Repository Item > Function Repository Item and you’ll get a Definition Notebook:


Definition notebook


We’ve optimized this to be as easy to fill in as possible, minimizing boilerplate and automatically checking for correctness and consistency whenever possible. And the result is that it’s perfectly realistic to create a simple Function Repository item in under an hour—with most of the time spent writing good expository examples.


When you press Submit to Repository your function gets sent to the Wolfram Function Repository review team, whose mandate is to ensure that functions in the repository do what they say they do, work in a way that is consistent with general Wolfram Language design principles, have good names, and are adequately documented. Except for very specialized functions, the goal is to finish reviews within a week (and sometimes considerably sooner)—and to publish functions as soon as they are ready.


There’s a digest of new (and updated) functions in the Function Repository that gets sent out every Friday—and makes for interesting reading (you can subscribe here):


Wolfram Function Repository email


The Wolfram Function Repository is a curated public resource that can be accessed from any Wolfram Language system (and, by the way, the source code for every function is available—just press the Source Notebook button). But there’s another important use case for the infrastructure of the Function Repository: privately deployed “resource functions”.


It all works through the Wolfram Cloud. You use the exact same Definition Notebook, but now instead of submitting to the public Wolfram Function Repository, you just deploy your function to the Wolfram Cloud. You can make it private so that only you, or some specific group, can access it. Or you can make it public, so anyone who knows its URL can immediately access and use it in their Wolfram Language system.


This turns out to be a tremendously useful mechanism, both for group projects, and for creating published material. In a sense it’s a very lightweight but robust way to distribute code—packaged into functions that can immediately be used. (By the way, to find the functions you’ve published from your Wolfram Cloud account, just go to the DeployedResources folder in the cloud file browser.)


(For organizations that want to manage their own function repository, it’s worth mentioning that the whole Wolfram Function Repository mechanism—including the infrastructure for doing reviews, etc.—is also available in a private form through the Wolfram Enterprise Private Cloud.)


So what’s in the public Wolfram Function Repository? There are a lot of “specialty functions” intended for specific “niche” purposes—but very useful if they’re what you want:


There are functions that add various kinds of visualizations:


Some functions set up user interfaces:


Some functions link to external services:


Some functions provide simple utilities:


There are also functions that are being explored for potential inclusion in the core system:


There are also lots of “leading-edge” functions, added as part of research or exploratory development. And for example in pieces I write (including this one), I make a point of having all pictures and other output be backed by “click-to-copy” code that reproduces them—and this code quite often contains functions either from the public Wolfram Function Repository or from (publicly accessible) private deployments.


The Paclet Repository Arrives


Paclets are a technology we’ve used for more than a decade and a half to distribute updated functionality to Wolfram Language systems in the field. In Version 13 we began the process of providing tools for anyone to create paclets. And since Version 13 we’ve introduced the Wolfram Language Paclet Repository as a centralized repository for paclets:


Wolfram Paclet Repository


What is a paclet? It’s a collection of Wolfram Language functionality—including function definitions, documentation, external libraries, stylesheets, palettes and more—that can be distributed as a unit, and immediately deployed in any Wolfram Language system.


The Paclet Repository is a centralized place where anyone can publish paclets for public distribution. So how does this relate to the Wolfram Function Repository? They are interestingly complementary—with different optimization and different setups. The Function Repository is more lightweight, the Paclet Repository more flexible. The Function Repository is for making available individual new functions, that independently fit into the whole existing structure of the Wolfram Language. The Paclet Repository is for making available larger-scale pieces of functionality, that can define a whole framework and environment of their own.


The Function Repository is also fully curated, with every function being reviewed by our team before it is posted. The Paclet Repository is an immediate-deployment system, without pre-publication review. In the Function Repository every function is specified just by its name—and our review team is responsible for ensuring that names are well chosen and have no conflicts. In the Paclet Repository, every contributor gets their own namespace, and all their functions and other material live inside that namespace. So, for example, I contributed the function RandomHypergraph to the Function Repository, which can be accessed just as ResourceFunction["RandomHypergraph"]. But if I had put this function in a paclet in the Paclet Repository, it would have to be accessed as something like PacletSymbol["StephenWolfram/Hypergraphs", "RandomHypergraph"].


PacletSymbol, by the way, is a convenient way of “deep accessing” individual functions inside a paclet. PacletSymbol temporarily installs (and loads) a paclet so that you can access a particular symbol in it. But more often one wants to permanently install a paclet (using PacletInstall), then explicitly load its contents (using Needs) whenever one wants to have its symbols available. (All the various ancillary elements, like documentation, stylesheets, etc. in a paclet get set up when it is installed.)
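In code, the two access patterns look like this (the context name passed to Needs is an assumption based on the paclet’s name):

```wolfram
(* one-off "deep access": temporarily installs and loads the paclet *)
PacletSymbol["WolframChemistry/ProteinVisualization", "AmidePlanePlot"]

(* permanent installation, then explicit loading of the paclet's contents *)
PacletInstall["WolframChemistry/ProteinVisualization"];
Needs["WolframChemistry`ProteinVisualization`"]
```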


What does a paclet look like in the Paclet Repository? Every paclet has a home page that typically includes an overall summary, a guide to the functions in the paclet, and some overall examples of the paclet:


ProteinVisualization page


Individual functions typically have their own documentation pages:


AmidePlanePlot page


Just like in the main Wolfram Language documentation, there can be a whole hierarchy of guide pages, and there can be things like tutorials.


Notice that in examples in paclet documentation, one often sees special inline constructs for paclet functions. These represent symbols in the paclet, presented in forms like PacletSymbol["WolframChemistry/ProteinVisualization", "AmidePlanePlot"] that allow these symbols to be accessed in a “standalone” way. If you directly evaluate such a form, by the way, it’ll force (temporary) installation of the paclet, then return the actual, raw symbol that appears in the paclet:


So how does one create a paclet suitable for submission to the Paclet Repository? You can do it purely programmatically, or you can start from File > New > Repository Item > Paclet Repository Item, which launches what amounts to a whole paclet creation IDE. The first step is to specify where you want to assemble your paclet. You give some basic information


Submit paclet information


then a Paclet Resource Definition Notebook is created, from which you can give function definitions, set up documentation pages, specify what you want your paclet’s home page to be like, etc.:


Paclet Resource Definition Notebook


There are lots of sophisticated tools that let you create full-featured paclets with the same kind of breadth and depth of capabilities that you find in the Wolfram Language itself. For example, Documentation Tools lets you construct full-featured documentation pages (function pages, guide pages, tutorials, …):


Documentation Tools


Once you’ve assembled a paclet, you can check it, build it, deploy it privately—or submit it to the Paclet Repository. And once you submit it, it will automatically get set up on the Paclet Repository servers, and within just a few minutes the pages you’ve created describing your paclet will show up on the Paclet Repository website.


So what’s in the Paclet Repository so far? There’s a lot of good and very serious stuff, contributed both by teams at our company and by members of the broader Wolfram Language community. In fact, many of the 134 paclets now in the Paclet Repository have enough in them that one could write a whole piece like this about them.


One category of things you’ll find in the Paclet Repository are snapshots of our ongoing internal development projects—many of which will eventually become built-in parts of the Wolfram Language. A good example of this is our LLM and Chat Notebook functionality, whose rapid development and deployment over the past year was made possible by the use of the Paclet Repository. Another example, representing ongoing work from our chemistry team (AKA WolframChemistry in the Paclet Repository) is the ChemistryFunctions paclet, which contains functions like:


And, yes, this is interactive:


Or, also from WolframChemistry:


Another “development snapshot” is DiffTools—a paclet for making and viewing diffs between strings, cells, notebooks, etc.:


A major paclet is QuantumFramework—which provides the functionality for our Wolfram Quantum Framework


Wolfram Quantum Framework


and delivers broad support for quantum computing (with at least a few connections to multiway systems and our Physics Project):


Talking of our Physics Project, there are over 200 functions supporting it in the Wolfram Function Repository. But there are also paclets, like WolframInstitute/Hypergraph:


An example of an externally contributed package is Automata—with more than 250 functions for doing computations related to finite automata:


Another contributed paclet is FunctionalParsers, which goes from a symbolic parser specification to an actual parser, here being used in a reverse mode to generate random “sentences”:


Phi4Tools is a more specialized paclet, for working with Feynman diagrams in field theory:


And, as another example, here’s MaXrd, for crystallography and x-ray scattering:


As just one more example, there’s the Organizer paclet—a utility paclet for making and manipulating organizer notebooks. But unlike the other paclets we’ve seen here, it doesn’t expose any Wolfram Language functions; instead, when you install it, it puts a palette in your Palettes list:


Organizer


Coming Attractions


As of today, Version 14 is finished, and out in the world. So what’s next? We have lots of projects underway—some already with years of development behind them. Some extend and strengthen what’s already in the Wolfram Language; some take it in new directions.


One major focus is broadening and streamlining the deployment of the language: unifying the way it’s delivered and installed on computers, packaging it so it can be efficiently integrated into other standalone applications, etc.


Another major focus is expanding the handling of very large amounts of data by the Wolfram Language—and seamlessly integrating out-of-core and lazy processing.


Then of course there’s algorithmic development. Some is “classical”, directly building on the towers of functionality we’ve developed over the decades. Some is more “AI based”. We’ve been creating heuristic algorithms and meta-algorithms ever since Version 1.0—increasingly using methods from machine learning. How far will neural net methods go? We don’t know yet. We’re routinely using them in things like algorithm selection. But to what extent can they help in the heart of algorithms?


I’m reminded of something we did back in 1987 in developing Version 1.0. There was a long tradition in numerical analysis of painstakingly deriving series approximations for particular cases of mathematical functions. But we wanted to be able to compute hundreds of different functions to arbitrary precision for any complex values of their arguments. So how did we do it? We generalized from series to rational approximations—and then, in a very “machine-learning-esque” way—we spent months of CPU time systematically optimizing these approximations. Well, we’ve been trying to do the same kind of thing again—though now over more ambitious domains—and now using not rational functions but large neural nets as our basis.


We’ve also been exploring using neural nets to “control” precise algorithms, in effect making heuristic choices which either guide or can be validated by the precise algorithms. So far, none of what we’ve produced has outperformed our existing methods, but it seems plausible that fairly soon it will.


We’re doing a lot with various aspects of metaprogramming. There’s the project of getting LLMs to help in the construction of Wolfram Language code—and in giving comments on it, and in analyzing what went wrong if the code didn’t do what one expected. Then there’s code annotation—where LLMs may help in doing things like predicting the most likely type for something. And there’s code compilation. We’ve been working for many years on a full-scale compiler for the Wolfram Language, and in every version what we have becomes progressively more capable. We’ve been doing some level of automatic compilation in particular cases (particularly ones involving numerical computation) for more than 30 years. And eventually full-scale automatic compilation will be possible for everything. But as of now some of the biggest payoffs from our compiler technology have been for our internal development, where we can now get optimal down-to-the-metal performance from compiled (albeit carefully written) Wolfram Language code.


One of the big lessons of the surprising success of LLMs is that there’s potentially more structure in meaningful human language than we thought. I’ve long been interested in creating what I’ve called a “symbolic discourse language” that gives a computational representation of everyday discourse. The LLMs haven’t explicitly done that. But they encourage the idea that it should be possible, and they also provide practical help in doing it. And whether the goal is to be able to represent narrative text, or contracts, or textual specifications, it’s a matter of extending the computational language we’ve built to encompass more kinds of concepts and structures.


There are typically several kinds of drivers for our continued development efforts. Sometimes it’s a question of continuing to build a tower of capabilities in some known direction (like, for example, solving PDEs). Sometimes the tower we’ve built suddenly lets us see new possibilities. Sometimes when we actually use what we’ve built we realize there’s an obvious way to polish or extend it—or to “double down” on something that we can now see is valuable. And then there are cases where things happening in the technology world suddenly open up new possibilities—like LLMs have recently done, and perhaps XR will eventually do. And finally there are cases where new science-related insights suggest new directions.


I had assumed that our Physics Project would at best have practical applications only centuries hence. But in fact it’s become clear that the correspondence it’s defined between physics and computation gives us quite immediate new ways to think about aspects of practical computation. And indeed we’re now actively exploring how to use this to define a new level of parallel and distributed computation in the Wolfram Language, as well as to represent symbolically not only the results of computations but also the ongoing process of computation.


One might think that after nearly four decades of intense development there wouldn’t be anything left to do in developing the Wolfram Language. But in fact at every level we reach, there’s ever more that becomes possible, and ever more that we can see might be possible. And indeed this moment is a particularly fertile one, with an unprecedentedly broad waterfront of possibilities. Version 14 is an important and satisfying waypoint. But there are wonderful things ahead—as we continue our long-term mission to make the computational paradigm achieve its potential, and to build our computational language to help that happen.


Download your copy of Version 14 now! » (It’s already live in the Wolfram Cloud!)
What If We Had Bigger Brains? Imagining Minds beyond Ours

Cats Don’t Talk


We humans have perhaps 100 billion neurons in our brains. But what if we had many more? Or what if the AIs we built effectively had many more? What kinds of things might then become possible? At 100 billion neurons, we know, for example, that compositional language of the kind we humans use is possible. At the 100 million or so neurons of a cat, it doesn’t seem to be. But what would become possible with 100 trillion neurons? And is it even something we could imagine understanding?


My purpose here is to start exploring such questions, informed by what we’ve seen in recent years in neural nets and LLMs, as well as by what we now know about the fundamental nature of computation, and about neuroscience and the operation of actual brains (like the one that’s writing this, imaged here):


What If We Had Bigger Brains? Imagining Minds beyond Ours


One suggestive point is that as artificial neural nets have gotten bigger, they seem to have successively passed a sequence of thresholds in capability:


So what’s next? No doubt there’ll be things like humanoid robotic control that have close analogs in what we humans already do. But what if we go far beyond the ~10^14 connections that our human brains have? What qualitatively new kinds of capabilities might there then be?


If this was about “computation in general” then there wouldn’t really be much to talk about. The Principle of Computational Equivalence implies that beyond some low threshold computational systems can generically produce behavior that corresponds to computation that’s as sophisticated as it can ever be. And indeed that’s the kind of thing we see both in lots of abstract settings, and in the natural world.


But the point here is that we’re not dealing with “computation in general”. We’re dealing with the kinds of computations that brains fundamentally do. And the essence of these seems to have to do with taking in large amounts of sensory data and then coming up with what amount to decisions about what to do next.


It’s not obvious that there’d be any reasonable way to do this. The world at large is full of computational irreducibility—where the only general way to work out what will happen in a system is just to run the underlying rules for that system step by step and see what comes out:

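This kind of step-by-step evolution can be sketched concretely with an elementary cellular automaton (rule 30 is my illustrative choice here): the only general way to find out what it produces is to run it.

```python
# A minimal sketch: evolving an elementary cellular automaton (rule 30,
# chosen as an illustrative example) step by step from a single seed cell.
def ca_step(cells, rule=30):
    n = len(cells)
    # each cell's next value is looked up from the rule number's bits,
    # indexed by the 3-cell neighborhood (left, center, right)
    return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

def run_ca(width=11, steps=5, rule=30):
    cells = [0] * width
    cells[width // 2] = 1          # single "seed" cell in the middle
    history = [cells]
    for _ in range(steps):
        cells = ca_step(cells, rule)
        history.append(cells)
    return history

for row in run_ca():
    print("".join("#" if c else "." for c in row))
```

Even this tiny program has no obvious shortcut: to know row 100, one in general has to compute rows 1 through 99.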

And, yes, there are plenty of questions and issues for which there’s essentially no choice but to do this irreducible computation—just as there are plenty of cases where LLMs need to call on our Wolfram Language computation system to get computations done. But brains, for the things most important to them, somehow seem to routinely manage to “jump ahead” without in effect simulating every detail. And what makes this possible is the fundamental fact that within any system that shows overall computational irreducibility there must inevitably be an infinite number of “pockets of computational reducibility”, in effect associated with “simplifying features” of the behavior of the system.


It’s these “pockets of reducibility” that brains exploit to be able to successfully “navigate” the world for their purposes in spite of its “background” of computational irreducibility. And in these terms things like the progress of science (and technology) can basically be thought of as the identification of progressively more pockets of computational reducibility. And we can then imagine that the capabilities of bigger brains could revolve around being able to “hold in mind” more of these pockets of computational reducibility.


We can think of brains as fundamentally serving to “compress” the complexity of the world, and extract from it just certain features—associated with pockets of reducibility—that we care about. And for us a key manifestation of this is the idea of concepts, and of language that uses them. At the level of raw sensory input we might see many detailed images of some category of thing—but language lets us describe them all just in terms of one particular symbolic concept (say “rock”).


In a rough first approximation, we can imagine that there’s a direct correspondence between concepts and words in our language. And it’s then notable that human languages all tend to have perhaps 30,000 common words (or word-like constructs). So is that scale the result of the size of our brains? And could bigger brains perhaps deal with many more words, say millions or more?


“What could all those words be about?” we might ask. After all, our everyday experience makes it seem like our current 30,000 words are quite sufficient to describe the world as it is. But in some sense this is circular: we’ve invented the words we have because they’re what we need to describe the aspects of the world we care about, and want to talk about. There will always be more features of, say, the natural world that we could talk about. It’s just that we haven’t chosen to engage with them. (For example, we could perfectly well invent words for all the detailed patterns of clouds in the sky, but those patterns are not something we currently feel the need to talk in detail about.)


But given our current set of words or concepts, is there “closure” to it? Can we successfully operate in a “self-consistent slice of concept space” or will we always find ourselves needing new concepts? We might think of new concepts as being associated with intellectual progress that we choose to pursue or not. But insofar as the “operation of the world” is computationally irreducible it’s basically inevitable that we’ll eventually be confronted with things that cannot be described by our current concepts.


So why is it that the number of concepts (or words) isn’t just always increasing? A fundamental reason is abstraction. Abstraction takes collections of potentially large numbers of specific things (“tiger”, “lion”, …) and allows them to be described “abstractly” in terms of a more general thing (say, “big cats”). And abstraction is useful if it’s possible to make collective statements about those general things (“all big cats have…”), in effect providing a consistent “higher-level” way of thinking about things.


If we imagine concepts as being associated with particular pockets of reducibility, the phenomenon of abstraction is then a reflection of the existence of networks of these pockets. And, yes, such networks can themselves show computational irreducibility, which can then have its own pockets of reducibility, etc.


So what about (artificial) neural nets? It’s routine to “look inside” these, and for example see the possible patterns of activation at a given layer based on a range of possible (“real-world”) inputs. We can then think of these patterns of activation as forming points in a “feature space”. And typically we’ll be able to see clusters of these points, which we can potentially identify as “emergent concepts” that we can view as having been “discovered” by the neural net (or rather, its training). Normally there won’t be existing words in human languages that correspond to most of these concepts. They represent pockets of reducibility, but not ones that we’ve identified, and that are captured by our typical 30,000 or so words. And, yes, even in today’s neural nets, there can easily be millions of “emergent concepts”.

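As a toy illustration of reading "emergent concepts" off as clusters in feature space, here is a small k-means sketch; the 2-D "activation vectors" are synthetic stand-ins (a real analysis would cluster a trained net's layer outputs), and k-means itself is just one assumed clustering method.

```python
# Toy sketch: finding clusters ("emergent concepts") in a feature space.
# The points below are synthetic stand-ins for real activation vectors.
def kmeans(points, k, iters=20):
    # deterministic initialization: evenly spaced points from the data
    centers = [points[i * len(points) // k] for i in range(k)]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center (squared distance)
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # move each center to the mean of its assigned points
        centers = [tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers, clusters

# two well-separated synthetic "concepts" in a 2-D feature space
pts = [(0.1 * i, 0.0) for i in range(5)] + [(5.0 + 0.1 * i, 5.0) for i in range(5)]
centers, clusters = kmeans(pts, 2)
print(centers)
```

Each recovered center can be thought of as a candidate "concept" that usually has no corresponding human word.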

But will these be useful abstractions or concepts, or merely “incidental examples of compression” not connected to anything else? The construction of neural nets implies that a pattern of “emergent concepts” at one layer will necessarily feed into the next layer. But the question is really whether the concept can somehow be useful “independently”—not just at this particular place in the neural net.


And indeed the most obvious everyday use for words and concepts—and language in general—is for communication: for “transferring thoughts” from one mind to another. Within a brain (or a neural net) there are all kinds of complicated patterns of activity, different in each brain (or each neural net). But a fundamental role that concepts, words and language play is to define a way to “package up” certain features of that activity in a form that can be robustly transported between minds, somehow inducing “comparable thoughts” in all of them.


The transfer from one mind to another can never be precise: in going from the pattern of activity in one brain (or neural net) to the pattern of activity in another, there’ll always be translation involved. But—at least up to a point—one can expect that the “more that’s said” the more faithful a translation can be.


But what if there’s a bigger brain, with more “emergent concepts” inside? Then to communicate about them at a certain level of precision we might need to use more words—if not a fundamentally richer form of language. And, yes, while dogs seem to understand isolated words (“sit”, “fetch”, …), we, with our larger brains, can deal with compositional language in which we can in effect construct an infinite range of meanings by combining words into phrases, sentences, etc.


At least as we currently imagine it, language defines a certain model of the world, based on some finite collection of primitives (words, concepts, etc.). The existence of computational irreducibility tells us that such a model can never be complete. Instead, the model has to “approximate things” based on the “network of pockets of reducibility” that the primitives in the language effectively define. And insofar as a bigger brain might in essence be able to make use of a larger network of pockets of reducibility, it can then potentially support a more precise model of the world.


And it could then be that if we look at such a brain and what it does, it will inevitably seem closer to the kind of “incomprehensible and irreducible computation” that’s characteristic of so many abstract systems, and systems in nature. But it could also be that in being a “brain-like construct” it’d necessarily tap into computational reducibility in such a way that—with the formalism and abstraction we’ve built—we’d still meaningfully be able to talk about what it can do.


At the outset we might have thought any attempt for us to “understand minds beyond ours” would be like asking a cat to understand algebra. But somehow the universality of the concepts of computation that we now know—with their ability to address the deepest foundations of physics and other fields—makes it seem more plausible we might now be in a position to meaningfully discuss minds beyond ours. Or at least to discuss the rather more concrete question of what brains like ours, but bigger than ours, might be able to do.


How Brains Seem to Work


As we’ve mentioned, at least in a rough approximation, the role of brains is to turn large amounts of sensory input into small numbers of decisions about what to do. But how does this happen?


Human brains continually receive input from a few million “sensors”, mostly associated with photoreceptors in our eyes and touch receptors in our skin. This input is processed by a total of about 100 billion neurons, each responding in a few milliseconds, and mostly organized into a handful of layers. There are altogether perhaps 100 trillion connections between neurons, many quite long range. At any given moment, a few percent of neurons (i.e. perhaps a billion) are firing. But in the end, all that activity seems to feed into particular structures in the lower part of the brain that in effect “take a majority vote” a few times a second to determine what to do next—in particular with the few hundred “actuators” our bodies have.

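The orders of magnitude quoted above imply a rough connectivity figure per neuron; here is the back-of-envelope arithmetic (the 2% firing fraction is an assumed stand-in for "a few percent"):

```python
# Order-of-magnitude figures quoted in the text
neurons = 1e11          # ~100 billion neurons
connections = 1e14      # ~100 trillion connections
firing_fraction = 0.02  # assumed stand-in for "a few percent"

connections_per_neuron = connections / neurons   # average fan-in/fan-out
neurons_firing = neurons * firing_fraction       # active at any moment
print(connections_per_neuron, neurons_firing)
```

So on average each neuron connects to on the order of a thousand others, with roughly a billion neurons active at any given moment.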

This basic picture seems to be more or less the same in all higher animals. The total number of neurons scales roughly with the number of “input sensors” (or, in a first approximation, the surface area of the animal—i.e. volume^(2/3)—which determines the number of touch sensors). The fraction of brain volume that consists of connections (“white matter”) as opposed to main parts of neurons (“gray matter”) increases as a power of the number of neurons. The largest brains—like ours—have a roughly nested pattern of folds that presumably reduce average connection lengths. Different parts of our brains have characteristic functions (e.g. motor control, handling input from our eyes, generation of language, etc.), although there seems to be enough universality that other parts can usually learn to take over if necessary. And in terms of overall performance, animals with smaller brains generally seem to react more quickly to stimuli.


So what was it that made brains originally arise in biological evolution? Perhaps it had to do with giving animals a way to decide where to go next as they moved around. (Plants, which don’t move around, don’t have brains.) And perhaps it’s because animals can’t “go in more than one direction at once” that brains seem to have the fundamental feature of generating a single stream of decisions. And, yes, this is probably why we have a single thread of “conscious experience”, rather than a whole collection of experiences associated with the activities of all our neurons. And no doubt it’s also what we leverage in the construction of language—and in communicating through a one-dimensional sequence of tokens.


It’s notable how similar our description of brains is to the basic operation of large language models: an LLM processes input from its “context window” by feeding it through large numbers of artificial neurons organized in layers—ultimately taking something like a majority vote to decide what token to generate next. There are differences, however, most notably that whereas brains routinely intersperse learning and thinking, current LLMs separate training from operation, in effect “learning first” and “thinking later”.


But almost certainly the core capabilities of both brains and neural nets don’t depend much on the details of their biological or architectural structure. It matters that there are many inputs and few outputs. It matters that there’s irreducible computation inside. It matters that the systems are trained on the world as it is. And, finally, it matters how “big” they are, in effect relative to the “number of relevant features of the world”.


In artificial neural nets, and presumably also in brains, memory is encoded in the strengths (or “weights”) of connections between neurons. And at least in neural nets it seems that the number of tokens (of textual data) that can reasonably be “remembered” is a few times the number of weights. (With current methods, the number of computational operations of training needed to achieve this is roughly the product of the total number of weights and the total number of tokens.) If there are too few weights, what happens is that the “memory” gets fuzzy, with details of the fuzziness reflecting details of the structure of the network.

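The scaling claims in that paragraph can be written down directly. The particular numbers below are hypothetical; only the proportionalities ("a few times the number of weights", "product of weights and tokens") come from the text.

```python
# Hypothetical model size; only the proportionalities are from the text.
weights = 1e9                  # number of connection weights (assumed)
tokens_per_weight = 3          # "a few times the number of weights"
tokens_remembered = tokens_per_weight * weights

# training cost ~ (total weights) x (total tokens), per the text
training_ops = weights * tokens_remembered
print(f"{tokens_remembered:.0e} tokens, {training_ops:.0e} training ops")
```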

But what’s crucial—for both neural nets and brains—is not so much to remember specifics of training data, but rather to just “do something reasonable” for a wide range of inputs, regardless of whether they’re in the training data. Or, in other words, to generalize appropriately from training data.


But what is “appropriate generalization”? As a practical matter, it tends to be “generalization that aligns with what we humans would do”. And it’s then a remarkable fact that artificial neural nets with fairly simple architectures can successfully do generalizations in a way that’s roughly aligned with human brains. So why does this work? Presumably it’s because there are universal features of “brain-like systems” that are close enough between human brains and neural nets. And once again it’s important to emphasize that what’s happening in both cases seems distinctly weaker than “general computation”.


A feature of “general computation” is that it can potentially involve unbounded amounts of time and storage space. But both brains and typical neural nets have just a fixed number of neurons. And although both brains and LLMs in effect have an “outer loop” that can “recycle” output to input, it’s limited.


And at least when it comes to brains, a key feature associated with this is the limit on “working memory”, i.e. memory that can readily be both read and written “in the course of a computation”. Bigger and more developed brains typically seem to support larger amounts of working memory. Adult humans can remember perhaps 5 or 7 “chunks” of data in working memory; for young children, and other animals, it’s less. Size of working memory (as we’ll discuss later) seems to be important in things like language capabilities. And the fact that it’s limited is no doubt one reason we can’t generally “run code in our brains”.


As we try to reflect on what our brains do, we’re most aware of our stream of conscious thought. But that represents just a tiny fraction of all our neural activity. Most of the activity is much less like “thought” and much more like typical processes in nature, with lots of elements seemingly “doing their own thing”. We might think of this as an “ocean of unconscious neural activity”, from which a “thread of consensus thought” is derived. Usually—much like in an artificial neural net—it’s difficult to find much regularity in that “unconscious activity”. Though when one trains oneself enough to get to the point of being able to “do something without thinking about it”, that presumably happens by organizing some part of that activity.


There’s always a question of what kinds of things we can learn. We can’t overcome computational irreducibility. But how broadly can we handle what’s computationally reducible? Artificial neural nets show a certain genericity in their operation: although some specific architectures are more efficient than others, it doesn’t seem to matter much whether the input they’re fed is images or text or numbers, or whatever. And for our brains it’s probably the same—though what we’ve normally experienced, and learned from, are the specific kinds of input that come from our eyes, ears, etc. And from these, we’ve ended up recognizing certain types of regularities—that we’ve then used to guide our actions, set up our environment, etc.


And, yes, this plugs into certain pockets of computational reducibility in the world. But there’s always further one could go. And how that might work with brains bigger than ours is at the core of what we’re trying to discuss here.


Language and Beyond


At some level we can view our brains as serving to take the complexity of the world and extract from it a compressed representation that our finite minds can handle. But what is the structure of that representation? A central aspect of it is that it ignores many details of the original input (like particular configurations of pixels). Or, in other words, it effectively equivalences many different inputs together.


But how then do we describe that equivalence class? Implementationally, say in a neural net, the equivalence class might correspond to an attractor to which many different initial conditions all evolve. In terms of the detailed pattern of activity in the neural net the attractor will typically be very hard to describe. But on a larger scale we can potentially just think of it as some kind of robust construct that represents a class of things—or what in terms of our process of thought we might describe as a “concept”.

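One concrete (and deliberately simplified) model of "many initial conditions flowing to one attractor" is a Hopfield-style network. This is an illustrative assumption, not a claim about the brain's actual mechanism: a stored pattern acts as an attractor, and noisy versions of it relax back to it, like many inputs collapsing to one "concept".

```python
# Hopfield-style sketch: a stored pattern becomes an attractor.
def train(patterns):
    # Hebbian weights: w[i][j] accumulates p[i]*p[j] over stored patterns
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, steps=5):
    # repeatedly update each unit toward the sign of its weighted input
    s = list(state)
    n = len(s)
    for _ in range(steps):
        for i in range(n):
            h = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

pattern = [1, 1, 1, -1, -1, -1]       # the stored "concept"
w = train([pattern])
noisy = [1, -1, 1, -1, -1, -1]        # one bit flipped
print(recall(w, noisy))               # relaxes back to the stored pattern
```

Many distinct noisy inputs end up at the same attractor, which is one way to picture an equivalence class of inputs becoming a single robust construct.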

At the lowest level there’s all sorts of complicated neural activity in our brains—most of it mired in computational irreducibility. But the “thin thread of conscious experience” that we extract from this we can for many purposes treat as being made up of higher-level “units of thought”, or essentially “discrete concepts”.


And, yes, it’s certainly our typical human experience that robust constructs—and particularly ones from which other constructs can be built—will be discrete. In principle one can imagine that there could be things like “robust continuous spaces of concepts” (“cat and dog and everything in between”). But we don’t have anything like the computational paradigm that shows us a consistent universal way that such things could fit together (there’s no robust analog of computation theory for real numbers, for example). And somehow the success of the computational paradigm—potentially all the way down to the foundations of the physical universe—doesn’t seem to leave much room for anything else.


So, OK, let’s imagine that we can represent our thread of conscious experience in terms of concepts. Well, that’s close to saying that we’re using language. We’re “packaging up” the details of our neural activity into “robust elements” which we can think of as concepts—and which are represented in language essentially by words. And not only does this “packaging” into language give a robust way for different brains to communicate; it also gives a single brain a robust way to “remember” and “redeploy” thoughts.


Within one brain one could imagine that one might be able to remember and “think” directly in terms of detailed low-level neural patterns. But no doubt the “neural environment” inside a brain is continually changing (not least because of its stream of sensory input). And so the only way to successfully “preserve a thought” across time is presumably to “package it up” in terms of robust elements, or essentially in terms of language. In other words, if we’re going to be able to consistently “think a particular thought” we probably have to formulate it in terms of something robust—like concepts.


But, OK, individual concepts are one thing. But language—or at least human language—is based on putting together concepts in structured ways. One might take a noun (“cat”) and qualify it with an adjective (“black”) to form a phrase that’s in effect a finer-grained version of the concept represented by the noun. And in a rough approximation one can think of language as formed from trees of nested phrases like this. And insofar as the phrases are independent in their structure (i.e. “context free”), we can parse such language by recursively understanding each phrase in turn—with the constraint that we can’t do it if the nesting goes too deep for us to hold the necessary stack of intermediate steps in our working memory.

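The idea of recursively parsing nested phrases, with a stack bounded by working memory, can be sketched as follows. The bracketed "phrase" grammar and the depth limit of 7 are illustrative assumptions, the latter a nod to the 5-to-7-chunk working-memory figure.

```python
# Recursive-descent sketch: nested phrases parsed with a bounded "stack".
def parse(tokens, max_depth=7):
    pos = 0

    def phrase(depth):
        # each nesting level consumes one slot of "working memory"
        nonlocal pos
        if depth > max_depth:
            raise RecursionError("nesting exceeds working-memory limit")
        if tokens[pos] == "(":
            pos += 1
            children = []
            while tokens[pos] != ")":
                children.append(phrase(depth + 1))
            pos += 1
            return children
        word = tokens[pos]
        pos += 1
        return word

    return phrase(0)

print(parse(["(", "black", "(", "cat", ")", ")"]))
```

Because each phrase is parsed independently (the grammar is context free), the only resource that limits comprehension here is the nesting depth, exactly the constraint the text describes.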

An important feature of ordinary human language is that it’s ultimately presented in a sequential way. Even though it may consist of a nested tree of phrases, the words that are the leaves of that tree are spoken or written in a one-dimensional sequence. And, yes, the fact that this is how it works is surely closely connected to the fact that our brains construct a single thread of conscious experience.


In the actuality of the few thousand human languages currently in use, there is considerable superficial diversity, but also considerable fundamental commonality. For example, the same parts of speech (noun, verb, etc.) typically show up, as do concepts like “subject” and “object”. But the details of how words are put together, and how things are indicated, can be fairly different. Sometimes nouns have case endings; sometimes there are separate prepositions. Sometimes verb tenses are indicated by annotating the verb; sometimes with extra words. And sometimes, for example, what would usually be whole phrases can be smooshed together into single words.


It’s not clear to what extent commonalities between languages are the result of shared history, and to what extent they’re consequences either of the particulars of our human sensory experience of the world, or the particular construction of our brains. It’s not too hard to get something like concepts to emerge in experiments on training neural nets to pass data through a “bottleneck” that simulates a “mind-to-mind communication channel”. But how compositionality or grammatical structure might emerge is not clear.


OK, but so what might change if we had bigger brains? If neural nets are a guide, one obvious thing is that we should be able to deal directly with a larger number of “distinct concepts”, or words. So what consequences would this have? Presumably one’s language would get “grammatically shallower”, in the sense that what would otherwise have had to be said with nested phrases could now be said with individual words. And presumably this would tend to lead to “faster communication”, requiring fewer words. But it would likely also lead to more rigid communication, with less ability to tweak shades of meaning, say by changing just a few words in a phrase. (And it would presumably also require longer training, to learn what all the words mean.)


In a sense we have a preview of what it’s like to have more words whenever we deal with specialized versions of existing language, aimed say at particular technical fields. There are additional words of “jargon” available, that make certain things “faster to say” (but require longer to learn). And with that jargon comes a certain rigidity, in saying easily only what the jargon says, and not something slightly different.


So how else could language be different with a bigger brain? With larger working memory, one could presumably have more deeply nested phrases. But what about more sophisticated grammatical structures, say ones that aren’t “context free”, in the sense that different nested phrases can’t be parsed separately? My guess is that this quickly devolves into requiring arbitrary computation—and runs into computational irreducibility. In principle it’s perfectly possible to have any program as the “message” one communicates. But if one has to run the program to “determine its meaning”, that’s in general going to involve computational irreducibility.


And the point is that with our assumptions about what “brain-like systems” do, that’s something that’s out of scope. Yes, one can construct a system (even with neurons) that can do it. But not with the “single thread of decisions from sensory input” workflow that seems characteristic of brains. (There are finer gradations one could consider—like languages that are context sensitive but don’t require general computation. But the Principle of Computational Equivalence strongly suggests that the separation between nested context-free systems and ones associated with arbitrary computation is very thin, and there doesn’t seem to be any particular reason to expect that the capabilities of a bigger brain would land right there.)


Said another way: the Principle of Computational Equivalence says it’s easy to have a system that can deal with arbitrary computation. It’s just that such a system is not “brain like” in its behavior; it’s more like a typical system we see in nature.


OK, but what other “additional features” can one imagine, for even roughly “brain-like” systems? One possibility is to go beyond the idea of a single thread of experience, and to consider a multiway system in which threads of experience can branch and merge. And, yes, this is what we imagine happens at a low level in the physical universe, particularly in connection with quantum mechanics. And indeed it’s perfectly possible to imagine, for example, a “quantum-like” LLM system in which one generates a graph of different textual sequences. But just “scaling up the number of neurons” in a brain, without changing the overall architecture, won’t get to this. We would need a different, multiway architecture: one with a “graph of consciousness” rather than a “stream of consciousness”, in which, in effect, we’re “thinking a graph of thoughts”, with thoughts themselves able to branch and merge.

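A minimal multiway system can be sketched as string rewriting in which each rule is applied at every possible position, so that states branch, and merge whenever two rewrites yield the same string. The particular rules below are arbitrary examples.

```python
# Minimal multiway system: apply each rewrite rule at every possible
# position; distinct results branch, identical results merge (via a set).
def multiway_step(states, rules):
    nxt = set()
    for s in states:
        for lhs, rhs in rules:
            i = s.find(lhs)
            while i != -1:
                nxt.add(s[:i] + rhs + s[i + len(lhs):])
                i = s.find(lhs, i + 1)
    return nxt

rules = [("A", "AB"), ("B", "A")]
states = {"A"}
for step in range(3):
    states = multiway_step(states, rules)
    print(step + 1, sorted(states))
```

Each generation is a "slice" of the branching graph of possible thoughts or textual sequences; a quantum-like LLM in this picture would generate such a graph rather than a single token stream.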

In our practical use of language, it’s most often communicated in spoken or written form—effectively as a one-dimensional sequence of tokens. But in math, for example, it’s common to have a certain amount of 2D structure, and in general there are also all sorts of specialized (usually technical) diagrammatic representations in use, often based on using graphs and networks—as we’ll discuss in more detail below.


But what about general pictures? Normally it’s difficult for us to produce these. But in generative AI systems it’s basically easy. So could we then imagine directly “communicating mental images” from one mind to another? Maybe as a practical matter some neural implant in our brain could aggregate neural signals from which a displayed image could be generated. But is there in fact something coherent that could be extracted from our brains in this way? Perhaps that can only happen after “consensus is formed”, and we’ve reduced things to a much thinner “thread of experience”. Or, in other words, perhaps the only robust way for us to “think about images” is in effect to reduce them to discrete concepts and language-like representations.


But perhaps if we “had the hardware” to display images directly from our minds it’d be a different story. And it’s sobering to imagine that perhaps the reason cats and dogs don’t appear to have compositional language is just that they don’t “have the hardware” to talk like we do (and it’s too laborious for them to “type with their paws”, etc.). And, by analogy, that if we “had the hardware” for displaying images, we’d discover we could also “think very differently”.


Of course, in some small ways we do have the ability to “directly communicate with images”, for example in our use of gestures and body language. Right now, these seem like largely ancillary forms of communication. But, yes, it’s conceivable that with bigger brains, they could be more.


And when it comes to other animals the story can be different. Cuttlefish are notable for dynamically producing elaborate patterns on their skin—giving them in a sense the hardware to “communicate in pictures”. But so far as one can tell, they produce just a small number of distinct patterns—and certainly nothing like a “pictorial generalization of compositional language”. (In principle one could imagine that “generalized cuttlefish” could do things like “dynamically run cellular automata on their skin”, just like all sorts of animals “statically” do in the process of growth or development. But to decode such patterns—and thereby in a sense enable “communicating in programs”—would typically require irreducible amounts of computation that are beyond the capabilities of any standard brain-like system.)


Sensors and Actuators


We humans have raw inputs coming into our brains from a few million sensors distributed across our usual senses of touch, sight, hearing, taste and smell (together with balance, temperature, hunger, etc.). In most cases the detailed sensor inputs are not independent; in a typical visual scene, for example, neighboring pixels are highly correlated. And it doesn’t seem to take many layers of neurons in our brains to distill our typical sensory experience from pure pieces of “raw data” to what we might view as “more independent features”.

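The claim that neighboring sensor values are highly correlated is easy to check on a toy signal. The smooth sine "signal" below is an assumed stand-in for real sensory data such as a row of image pixels.

```python
# Sketch: neighboring samples of a smooth "sensory" signal are highly
# correlated, which is what lets early layers compress raw input.
import math

signal = [math.sin(0.1 * i) for i in range(200)]   # assumed smooth stand-in

def corr(xs, ys):
    # Pearson correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

neighbor_corr = corr(signal[:-1], signal[1:])      # lag-1 correlation
print(round(neighbor_corr, 3))
```

With correlations this close to 1, each new sample carries far less than one sample's worth of independent information, which is the redundancy that the first layers of processing can strip away.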

Of course there’ll usually be much more in the raw data than just those features. But the “features” typically correspond to aspects of the data that we’ve “learned are useful to us”—normally connected to pockets of computational reducibility that exist in the environment in which we operate. Are the features we pick out all we’ll ever need? In the end, we typically want to derive a small stream of decisions or actions from all the data that comes in. But how many “intermediate features” do we need to get “good” decisions or actions?


That really depends on two things. First, what our decisions and actions are like. And second, what our raw data is like. Early in the history of our species, everything was just about “indigenous human experience”: what the natural world is like, and what we can do with our bodies. But as soon as we were dealing with technology, that changed. And in today’s world we’re constantly exposed, for example, to visual input that comes not from the natural world, but, say, from digital displays.


And, yes, we often try to arrange our “user experience” to align with what’s familiar from the natural world (say by having objects that stay unchanged when they’re moved across the screen). But it doesn’t have to be that way. And indeed it’s easy—even with simple programs—to generate for example visual images very different from what we’re used to. And in many such cases, it’s very hard for us to “tell what’s going on” in the image. Sometimes it’ll just “look too complicated”. Sometimes it’ll seem like it has pieces we should recognize, but we don’t:

[image]

When it’s “just too complicated”, that’s often a reflection of computational irreducibility. But when there are pieces we might “think we should recognize”, that can be a reflection of pockets of reducibility we’re just not familiar with. If we imagine a space of possible images—as we can readily produce with generative AI—there will be some that correspond to concepts (and words) we’re familiar with. But the vast majority will effectively lie in “interconcept space”: places where we could have concepts, but don’t, at least yet:

[image]

So what could bigger brains do with all this? Potentially they could handle more features, and more concepts. Full computational irreducibility will always in effect ultimately overpower them. But when it comes to handling pockets of reducibility, they’ll presumably be able to deal with more of them. So in the end, it’s very much as one might expect: a bigger brain should be able to track more things going on, “see more details”, etc.

Brains of our size seem like they are in effect sufficient for “indigenous human experience”. But with technology in the picture, it’s perfectly possible to “overload” them. (Needless to say, technology—in the form of filtering, data analysis, etc.—can also reduce that overload, in effect taking raw input and bringing our actual experience of it closer to something “indigenous”.)

It’s worth pointing out that while two brains of a given size might be able to “deal with the same number of features or concepts”, those features or concepts might be different. One brain might have learned to talk about the world in terms of one set of primitives (such as certain basic colors); another in terms of a different set of primitives. But if both brains are sampling “indigenous human experience” in similar environments one can expect that it should be possible to translate between these descriptions—just as it is generally possible to translate between things said in different human languages.

But what if the brains are effectively sampling “different slices of reality”? What if one’s using technology to convert different physical phenomena to forms (like images) that we can “indigenously” handle? Perhaps we’re sensing different electromagnetic frequencies; perhaps we’re sensing molecular or chemical properties; perhaps we’re sensing something like fluid motion. The kinds of features that will be “useful” may be quite different in these different modalities. Indeed, even something as seemingly basic as the notion of an “object” may not be so relevant if our sensory experience is effectively of continuous fluid motion.

But in the end, what’s “useful” will depend on what we can do. And once again, it depends on whether we’re dealing with “pure humans” (who can’t, for example, move like octopuses) or with humans “augmented by technology”. And here we start to see an issue that relates to the basic capabilities of our brains.

As “pure humans”, we have certain “actuators” (basically in the form of muscles) that we can “indigenously” operate. But with technology it’s perfectly possible for us to use quite different actuators in quite different configurations. And as a practical matter, with brains like ours, we may not be able to make them work.

For example, while humans can control helicopters, they never managed to control quadcopters—at least not until digital flight controllers could do most of the work. In a sense there were just too many degrees of freedom for brains like ours to deal with. Should bigger brains be able to do more? One would think so. And indeed one could imagine testing this with artificial neural nets. In millipedes, for example, their actual brains seem to support only a couple of patterns of motion of their legs (roughly, same phase vs. opposite phase). But one could imagine that with a bigger brain, all sorts of other patterns would become possible.

Ultimately, there are two issues at stake here. The first is having a brain be able to “independently address” enough actuators, or in effect enough degrees of freedom. The second is having a brain be able to control those degrees of freedom. And for example with mechanical degrees of freedom there are again essentially issues of computational irreducibility. Looking at the space of possible configurations—say of millipede legs—does one effectively just have to trace the path to find out if, and how, one can get from one configuration to another? Or are there instead pockets of reducibility, associated with regularities in the space of configurations, that let one “jump ahead” and figure this out without tracing all the steps? It’s those pockets of reducibility that brains can potentially make use of.

When it comes to our everyday “indigenous” experience of the world, we are used to certain kinds of computational reducibility, associated for example with familiar natural laws, say about motion of objects. But what if we were dealing with different experiences, associated with different senses?

For example, imagine (as with dogs) that our sense of smell was better developed than our sense of sight—as reflected by more nerves coming into our brains from our noses than our eyes. Our description of the world would then be quite different, based for example not on geometry revealed by the line-of-sight arrival of light, but instead by the delivery of odors through fluid motion and diffusion—not to mention the probably-several-hundred-dimensional space of odors, compared to the red, green, blue space of colors. Once again there would be features that could be identified, and “concepts” that could be defined. But those might only be useful in an environment “built for smell” rather than one “built for sight”.

And in the end, how many concepts would be useful? I don’t think we have any way to know. But it certainly seems as if one can be a successful “smell-based animal” with a smaller brain (presumably supporting fewer concepts) than one needs as a successful “sight-based animal”.

One feature of “natural senses” is that they tend to be spatially localized: an animal basically senses things only where it is. (We’ll discuss the case of social organisms later.) But what if we had access to a distributed array of sensors—say associated with IoT devices? The “effective laws of nature” that one could perceive would then be different. Maybe there would be regularities that could be captured by a small number of concepts, but it seems more likely that the story would be more complicated, and that in effect one would “need a bigger brain” to be able to keep track of what’s going on, and make use of whatever pockets of reducibility might exist.

There are somewhat similar issues if one imagines changing the timescales for sensory input. Our perception of space, for example, depends on the fact that light travels fast enough that in the milliseconds it takes our brain to register the input, we’ve already received light from everything that’s around us. But if our brains operated a million times faster (as digital electronics does) we’d instead be registering individual photons. And while our brains might aggregate these to something like what we ordinarily perceive, there may be all sorts of other (e.g. quantum optics) effects that would be more obvious.

Abstraction

The more abstractly we try to think, the harder it seems to get. But would it get easier if we had bigger brains? And might there perhaps be fundamentally higher levels of abstraction that we could reach—but only if we had bigger brains?

As a way to approach such questions, let’s begin by talking a bit about the history of the phenomenon of abstraction. We might already say that basic perception involves some abstraction, capturing as it does a filtered version of the world as it actually is. But perhaps we reach a different level when we start to ask “what if?” questions, and to imagine how things in the world could be different than they are.

But somehow when it comes to us humans, it seems as if the greatest early leap in abstraction was the invention of language, and the explicit delineation of concepts that could be quite far from our direct experience. The earliest written records tend to be rather matter-of-fact, mostly recording as they do events and transactions. But already there are plenty of signs of abstraction. Numbers independent of what they count. Things that should happen in the future. The concept of money.

There seems to be a certain pattern to the development of abstraction. One notices that some category of things one sees many times can be considered similar, then one “packages these up” into a concept, often described by a word. And in many cases, there’s a certain kind of self-amplification: once one has a word for something (as a modern example, say “blog”), it becomes easier for us to think about the thing, and we tend to see it or make it more often in the world around us. But what really makes abstraction take off is when we start building a whole tower of it, with one abstract concept recursively being based on others.

Historically this began quite slowly. And perhaps it was seen first in theology. There were glimmerings of it in things like early (syllogistic) logic, in which one started to be able to talk about the form of arguments, independent of their particulars. And then there was mathematics, where computations could be done just in terms of numbers, independent of where those numbers came from. And, yes, while there were tables of “raw computational results”, numbers were usually discussed in terms of what they were numbers of. And indeed when it came to things like measures of weight, it took until surprisingly modern times for there to be an absolute, abstract notion of weight, independent of whether it was a weight of figs or of wool.

The development of algebra in the early modern period can be considered an important step forward in abstraction. Now there were formulas that could be manipulated abstractly, without even knowing what particular numbers x stood for. But it would probably be fair to say that there was a major acceleration in abstraction in the 19th century—with the development of formal systems that could be discussed in “purely symbolic form” independent of what they might (or might not) “actually represent”.

And it was from this tradition that modern notions of computation emerged (and indeed particularly ones associated with symbolic computation that I personally have extensively used). But the most obvious area in which towers of abstraction have been built is mathematics. One might start with numbers (that could count things). But soon one’s on to variables, functions, spaces of functions, category theory—and a zillion other constructs that abstractly build on each other.

The great value of abstraction is that it allows one to think about large classes of things all at once, instead of each separately. But how do those abstract concepts fit together? The issue is that often it’s in a way that’s very remote from anything about which we have direct experience from our raw perception of the world. Yes, we can define concepts about transfinite numbers or higher categories. But they don’t immediately relate to anything we’re familiar with from our everyday experience.

As a practical matter one can often get a sense of how high something is on the tower of abstraction by seeing how much one has to explain to build up to it from “raw experiential concepts”. Just sometimes it turns out that, once one hears about a certain seemingly “highly abstract” concept, one can actually explain it surprisingly simply, without going through the whole historical chain that led to it. (A notable example of this is the concept of universal computation—which arose remarkably late in human intellectual history, but is now quite easy to explain, albeit particularly given its actual widespread embodiment in technology.) But the more common case is that there’s no choice but to explain a whole tower of concepts.

At least in my experience, however, when one actually thinks about “highly abstract” things, one does it by making analogies to more familiar, more concrete things. The analogies may not be perfect, but they provide scaffolding which allows our brains to take what would otherwise be quite inaccessible steps.

At some level any abstraction is a reflection of a pocket of computational reducibility. Because if a useful abstraction can be defined, what it means is that it’s possible to say something in a “summarized” or reduced way, in effect “jumping ahead”, without going through all the computational steps or engaging with all the details. And one can then think of towers of abstraction as being like networks of pockets of computational reducibility. But, yes, it can be hard to navigate these.

Underneath, there’s lots of computational irreducibility. And if one is prepared to “go through all the steps” one can often “get to an answer” without all the “conceptual difficulty” of complex abstractions. But while computers can often readily “go through all the steps”, brains can’t. And that’s in a sense why we have to use abstraction. But inevitably, even if we’re using abstraction, and the pockets of computational reducibility associated with it, there’ll be shadows of the computational irreducibility underneath. And in particular, if we try to “explore everything”, our network of pockets of reducibility will inevitably “get complicated”, and ultimately also be mired in computational irreducibility, albeit with “higher-level” constructs than in the computational irreducibility underneath.

No finite brain will ever be able to “go all the way”, but it starts to seem likely that a bigger brain will be able to “reach further” in the network of abstraction. But what will it find there? How does the character of abstraction change when we take it further? We’ll be able to discuss this a bit more concretely when we talk about computational language below. But perhaps the main thing to say now is that—at least in my experience—most higher abstractions don’t feel as if they’re “structurally different” once one understands them. In other words, most of the time, it seems as if the same patterns of thought and reasoning that one’s applied in many other places can be applied there too, just to different kinds of constructs.

Sometimes, though, there seem to be exceptions. Shocks to intuition that seem to separate what one’s now thinking about from anything one’s thought before. And, for example, for me this happened when I started looking broadly at the computational universe. I had always assumed that simple rules would lead to simple behavior. But many years ago I discovered that in the computational universe this isn’t true (hence computational irreducibility). And this led to a whole different paradigm for thinking about things.

It feels a bit like in metamathematics. Where one can imagine one type of abstraction associated with different constructs out of which to form theorems. But where somehow there’s another level associated with different ways to build new theorems, or indeed whole spaces of theorems. Or to build proofs from proofs, or proofs from proofs of proofs, etc. But the remarkable thing is that there seems to be an ultimate construct that encompasses it all: the ruliad.

We can describe the ruliad as the entangled limit of all possible computations. But we can also describe it as the limit of all possible abstractions. And it seems to lie underneath all physical reality, as well as all possible mathematics, etc. But, we might ask, how do brains relate to it?

Inevitably, it’s full of computational irreducibility. And looked at as a whole, brains can’t get far with it. But the key idea is to think about how brains as they are—with all their various features and limitations—will “parse” it. And what I’ve argued is that what “brains as they are” will perceive about the ruliad are the core laws of physics (and mathematics) as we know them. In other words, it’s because brains are the way they are that we perceive the laws of physics that we perceive.

Would it be different for bigger brains? Not if they’re the “same kind of brains”. Because what seems to matter for the core laws of physics are really just two properties of observers. First, that they’re computationally bounded. And second, that they believe they are persistent in time, and have a single thread of experience through time. And both of these seem to be core features of what makes brains “brain-like”, rather than just arbitrary computational systems.

It’s a remarkable thing that just these features are sufficient to make core laws of physics inevitable. But if we want to understand more about the physics we’ve constructed—and the laws we’ve deduced—we probably have to understand more about what we’re like as observers. And indeed, as I’ve argued elsewhere, even our physical scale (much bigger than molecules, much smaller than the whole universe) is for example important in giving us the particular experience (and laws) of physics that we have.

Would this be different with bigger brains? Perhaps a little. But anything that something brain-like can do pales in comparison to the computational irreducibility that exists in the ruliad and in the natural world. Nevertheless, with every new pocket of computational reducibility that’s reached we get some new abstraction about the world, or in effect, some new law about how the world works.

And as a practical matter, each such abstraction can allow us to build a whole collection of new ways of thinking about the world, and making things in the world. It’s challenging to trace this arc. Because in a sense it’ll all be about “things we never thought to think about before”. Goals we might define for ourselves that are built on a tower of abstraction, far away from what we might think of as “indigenous human goals”.

It’s important to realize that there won’t just be one tower of abstraction that can be built. There’ll inevitably be an infinite network of pockets of computational reducibility, with each path leading to a different specific tower of abstraction. And indeed the abstractions we have pursued reflect the particular arc of human intellectual history. Bigger brains—or AIs—have many possible directions they can go, each one defining a different path of history.

One question to ask is to what extent reaching higher levels of abstraction is a matter of education, and to what extent it requires additional intrinsic capabilities of a brain. It is, I suspect, a mixture. Sometimes it’s really just a question of knowing “where that pocket of reducibility is”, which is something we can learn from education. But sometimes it’s a question of navigating a network of pockets, which may only be possible when brains reach a certain level of “computational ability”.

There’s another thing to discuss, related to education. And that’s the fact that over time, more and more “distinct pieces of knowledge” get built up in our civilization. There was perhaps a time in history when a brain of our size could realistically commit to memory at least the basics of much of that knowledge. But today that time has long passed. Yes, abstraction in effect compresses what one needs to know. But the continual addition of new and seemingly important knowledge, across countless specialties, makes it impossible for brains of our size to keep up.

Plenty of that knowledge is, though, quite siloed in different areas. But sometimes there are “grand analogies” to make—say pulling an idea from relativity theory and applying it to biological evolution. In a sense such analogies reveal new abstractions—but to make them requires knowledge that spans many different areas. And that’s a place where bigger brains—or AIs—can potentially do something that’s in a fundamental way “beyond us”.

Will there always be such “grand analogies” to make? The general growth of knowledge is inevitably a computationally irreducible process. And within it there will inevitably be pockets of reducibility. But how often in practice will one actually encounter “long-range connections” across “knowledge space”? As a specific example one can look at metamathematics, where such connections are manifest in theorems that link seemingly different areas of mathematics. And this example leads one to realize that at some deep level grand analogies are in a sense inevitable. In the context of the ruliad, one can think of different domains of knowledge as corresponding to different parts. But the nature of the ruliad—encompassing as it does everything that is computationally possible—inevitably imbues it with a certain homogeneity, which implies that (as the Principle of Computational Equivalence might suggest) there must ultimately be a correspondence between different areas. In practice, though, this correspondence may be at a very “atomic” (or “formal”) level, far below the kinds of descriptions (based on pockets of reducibility) that we imagine brains normally use.

But, OK, will it always take an “expanding brain” to keep up with the “expanding knowledge” we have? Computational irreducibility guarantees that there’ll always in principle be “new knowledge” to be had—separated from what’s come before by irreducible amounts of computation. But then there’s the question of whether in the end we’ll care about it. After all, it could be that the knowledge we can add is so abstruse that it will never affect any practical decisions we have to make. And, yes, to some extent that’s true (which is why only some tiny fraction of the Earth’s population will care about what I’m writing here). But another consequence of computational irreducibility is that there will always be “surprises”—and those can eventually “push into focus” even what at first seems like arbitrarily obscure knowledge.

Computational Language

Language in general—and compositional language in particular—is arguably the greatest invention of our species. But is it somehow “the top”—the highest possible representation of things? Or if, for example, we had bigger brains, is there something beyond it that we could reach?

Well, in some very formal sense, yes, compositional language (at least in idealized form) is “the top”. Because—at least if it’s allowed to include utterances of any length—then in some sense it can in principle encode arbitrary, universal computations. But this really isn’t true in any useful sense—and indeed to apply ordinary compositional language in this way would require doing computationally irreducible computations.

So we return to the question of what might in practice lie beyond ordinary human language. I wondered about this for a long time. But in the end I realized that the most important clue is in a sense right in front of me: the concept of computational language, that I’ve spent much of my life exploring.

It’s worth saying at the outset that the way computational language plays out for computers and for brains is somewhat different, and in some respects complementary. In computers you might specify something as a Wolfram Language symbolic expression, and then the “main action” is to evaluate this expression, potentially running a long computation to find out what the expression evaluates to.

Brains aren’t set up to do long computations like this. For them a Wolfram Language expression is something to use in effect as a “representation of a thought”. (And, yes, that’s an important distinction between the computational language concept of Wolfram Language, and standard “programming languages”, which are intended purely as a way to tell a computer what to do, not a way to represent thoughts.)

So what kinds of thoughts can we readily represent in our computational language? There are ones involving explicit numbers, or mathematical expressions. There are ones involving cities and chemicals, and other real-world entities. But then there are higher-level ones, that in effect describe more abstract structures.

For example, there’s NestList, which gives the result of nesting any operation, here named f:

NestList[f, x, 4]  →  {x, f[x], f[f[x]], f[f[f[x]]], f[f[f[f[x]]]]}

At the outset, it’s not obvious that this would be a useful thing to do. But in fact it’s a very successful abstraction: there are lots of functions f for which one wants to do this.
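The same pattern is easy to sketch outside Wolfram Language too. Here is a rough Python analogue (the nest_list helper is a hypothetical name, written just for illustration):

```python
# A rough Python analogue of Wolfram Language's NestList:
# apply a function f repeatedly, collecting all intermediate results.
def nest_list(f, x, n):
    results = [x]
    for _ in range(n):
        x = f(x)
        results.append(x)
    return results

# Any function can be nested this way, e.g. repeated doubling:
print(nest_list(lambda x: 2 * x, 1, 4))  # → [1, 2, 4, 8, 16]
```

The point is that nest_list itself knows nothing about doubling; the operation being nested is supplied as an argument, which is what makes the abstraction so broadly reusable.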

In the development of ordinary human language, words tend to get introduced when they’re useful, or, in other words, when they express things one often wants to express. But somehow in human language the words one gets tend to be more concrete. Maybe they describe something that directly happens to objects in the world. Maybe they describe our impression of a human mental state. Yes, one can make rather vague statements like “I’m going to do something to someone”. But human language doesn’t normally “go meta”, doing things like NestList where one’s saying that one wants to take some “direct statement” and in effect “work with the statement”. In some sense, human language tends to “work with data”, applying a simple analog of code to it. Our computational language can “work with code” as “raw material”.

One can think about this as a “higher-order function”: a function that operates not on data, but on functions. And one can keep going, dealing with functions that operate on functions that operate on functions, and so on. And at every level one is increasing the generality—and abstraction—at which one is working. There may be many specific functions (a bit analogous to verbs) that operate on data (a bit analogous to nouns). But when we talk about operating on functions themselves we can potentially have just a single function (like NestList) that operates, quite generally, on many functions. In ordinary language, we might call such things “metaverbs”, but they aren’t something that commonly occurs.
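As a hedged Python sketch (twice and iterate_transform are illustrative names, not standard library functions), a function that operates on functions—and an operator one level higher still, operating on such function-transformers—might look like:

```python
# A higher-order function: takes a function, returns a new function.
def twice(f):
    return lambda x: f(f(x))

# One level higher: apply a function-transformer t to a function n times.
def iterate_transform(t, n):
    def apply_to(f):
        for _ in range(n):
            f = t(f)
        return f
    return apply_to

add1 = lambda x: x + 1
add4 = iterate_transform(twice, 2)(add1)  # i.e. twice(twice(add1))
print(add4(0))  # → 4
```

Note that iterate_transform never mentions numbers at all: it manipulates functions-of-functions as pure raw material, which is the "going meta" that ordinary language rarely does.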

But what makes them possible in computational language? Well, it’s taking the computational paradigm seriously, and representing everything in computational terms: objects, actions, etc. In Wolfram Language, it’s that we can represent everything as a symbolic expression. Arrays of numbers (or countries, or whatever) are symbolic expressions. Graphics are symbolic expressions. Programs are symbolic expressions. And so on.

And given this uniformity of representation it becomes feasible—and natural—to do higher-order operations, that in effect manipulate symbolic structure without being concerned about what the structure might represent. At some level we can view this as leading to the ultimate abstraction embodied in the ruliad, where in a sense “everything is pure structure”. But in practice in Wolfram Language we try to “anchor” what we’re doing to known concepts from ordinary human language—so that we use names for things (like NestList) that are derived from common English words.

In some formal sense this isn’t necessary. Everything can be “purely structural”, as it is not only in the ruliad but also in constructs like combinators, where, say, the operation of addition can be represented by:

[image: combinator expression representing addition]

Combinators have been around for more than a century. But they are almost impenetrably difficult for most humans to understand. Somehow they involve too much “pure abstraction”, not anchored to concepts we “have a sense of” in our brains.
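To get a feel for that impenetrability, here is a minimal Python sketch of SK-combinator reduction (the nested-tuple encoding is my own illustrative choice): even the identity function, built as S K K, only reveals itself by mechanically running the rewrite rules.

```python
# Minimal SK-combinator reducer. Terms are the atoms S and K, variables
# like ("v",), or two-element tuples (f, a) meaning "apply f to a".
S, K = ("S",), ("K",)

def app(*terms):
    # Left-associative application: app(a, b, c) means ((a b) c).
    result = terms[0]
    for t in terms[1:]:
        result = (result, t)
    return result

def step(t):
    """Perform one leftmost reduction step; return (term, changed)."""
    if isinstance(t, tuple) and len(t) == 2:
        f, a = t
        # Rule: ((K x) a) -> x
        if isinstance(f, tuple) and len(f) == 2 and f[0] == K:
            return f[1], True
        # Rule: (((S x) y) a) -> ((x a) (y a))
        if (isinstance(f, tuple) and len(f) == 2
                and isinstance(f[0], tuple) and len(f[0]) == 2
                and f[0][0] == S):
            return ((f[0][1], a), (f[1], a)), True
        nf, changed = step(f)
        if changed:
            return (nf, a), True
        na, changed = step(a)
        if changed:
            return (f, na), True
    return t, False

def reduce_term(t, limit=1000):
    # Repeatedly rewrite until no rule applies (or we hit the limit).
    for _ in range(limit):
        t, changed = step(t)
        if not changed:
            break
    return t

# S K K behaves as the identity: ((S K K) v) reduces to v.
print(reduce_term(app(S, K, K, ("v",))))  # → ('v',)
```

Even this tiny identity example takes two mechanical rewrite steps to unravel; combinator expressions encoding something like addition quickly become unreadable to humans, which is exactly the "too much pure abstraction" at issue here.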

It’s been interesting for me to observe over the years what it’s taken for people (including myself) to come to terms with the kind of higher-order constructs that exist in the Wolfram Language. The typical pattern is that over the course of months or years one gets used to lots of specific cases. And only after that is one able—often in the end rather quickly—to “get to the next level” and start to use some generalized, higher-order construct. But normally one can in effect only “go one level at a time”. After one groks one level of abstraction, that seems to have to “settle” for a while before one can go on to the next one.

Somehow it seems as if one is gradually “feeling out” a certain amount of computational irreducibility, to learn about a new pocket of reducibility, that one can eventually use to “think in terms of”.

Could “having a bigger brain” speed this up? Maybe it’d be useful to be able to remember more cases, and perhaps get more into “working memory”. But I rather suspect that combinators, for example, are in some sense fundamentally beyond all brain-like systems. It’s much as the Principle of Computational Equivalence suggests: one quickly “ascends” to things that are as computationally sophisticated as anything—and therefore inevitably involve computational irreducibility. There are only certain specific setups that remain within the computationally bounded domain that brain-like systems can deal with.

Of course, even though they can’t directly “run code in their brains”, humans—and LLMs—can perfectly well use Wolfram Language as a tool, getting it to actually run computations. And this means they can readily “observe phenomena” that are computationally irreducible. And indeed in the end it’s very much the same kind of thing to observe such phenomena in the abstract computational universe as in the “real” physical universe. And the point is that in both cases, brain-like systems will pull out only certain features, essentially corresponding to pockets of computational reducibility.

How do things like higher-order functions relate to this? At this point it’s not completely clear. Presumably in at least some sense there are hierarchies of higher-order functions that capture certain kinds of regularities that can be thought of as associated with networks of computational reducibility. And it’s conceivable that category theory and its higher-order generalizations are relevant here. In category theory one imagines applying sequences of functions (“morphisms”) and it’s a foundational assumption that the effect of any sequence of functions can also be represented by just a single function—which seems tantamount to saying that one can always “jump ahead”, or in other words, that everything one’s dealing with is computationally reducible. Higher-order category theory then effectively extends this to higher-order functions, but always with what seem like assumptions of computational reducibility.
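The "a sequence of functions is again a single function" assumption can be made concrete in Python (compose here is an illustrative helper, not a standard library function):

```python
from functools import reduce

# Composing any chain of functions yields just one more function:
# compose(f, g, h)(x) == f(g(h(x))). Collapsing the chain into a single
# function is what lets one "jump ahead" rather than trace each step.
def compose(*fs):
    return reduce(lambda f, g: lambda x: f(g(x)), fs)

pipeline = compose(lambda x: x + 1, lambda x: 2 * x, lambda x: x - 3)
print(pipeline(10))  # → 15, i.e. ((10 - 3) * 2) + 1
```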

And, yes, this all seems highly abstract, and difficult to understand. But does it really need to be, or is there some way to “bring it down” to a level that’s close to everyday human thinking? It’s not clear. But in a sense the core art of computational language design (that I’ve practiced so assiduously for nearly half a century) is precisely to take things that at first might seem abstruse, and somehow cast them into an accessible form. And, yes, this is something that’s about as intellectually challenging as anything—because in a sense it involves continually trying to “figure out what’s really going on”, and in effect “drilling down” to get to the foundations of everything.

But, OK, when one gets there, how simple will things be? Part of that depends on how much computational irreducibility is left when one reaches what one considers to be “the foundations”. And part in a sense depends on the extent to which one can “find a bridge” between the foundations and something that’s familiar. Of course, what’s “familiar” can change. And indeed over the four decades that I’ve been developing the Wolfram Language quite a few things (particularly in areas like functional programming) that at first seemed abstruse and unfamiliar have begun to seem more familiar. And, yes, it’s taken the collective development and dissemination of the relevant ideas to achieve that. But now it “just takes education”; it doesn’t “take a bigger brain” to deal with these things.

\n

One of the core features of the Wolfram Language is that it represents everything as a symbolic expression. And, yes, symbolic expressions are formally able to represent any kind of computational structure. But beyond that, the important point is that they’re somehow set up to be a match for how brains work.

\n

And in particular, symbolic expressions can be thought of “grammatically” as consisting of nested functions that form a tree-like structure; effectively a more precise version of the typical kind of grammar that we find in human language. And, yes, just as we manage to understand and generate human language with a limited working memory, so (at least at the grammatical level) we can do the same thing with computational language. In other words, in dealing with Wolfram Language we’re leveraging our faculties with human language. And that’s why Wolfram Language can serve as such an effective bridge between the way we think about things, and what’s computationally possible.
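As a rough sketch of this tree structure (in Python, using a hypothetical (head, arguments…) tuple encoding rather than actual symbolic expressions), one can make the nesting explicit and measure it:

```python
# A symbolic expression as a (head, *args) tree: a hypothetical minimal
# analog of how nested functions form tree-like structures.
expr = ("Plus", ("Times", 2, ("Power", "x", 2)), ("Times", 3, "x"), 1)

def depth(e):
    """Nesting depth of an expression tree (atoms have depth 0)."""
    if not isinstance(e, tuple):
        return 0
    return 1 + max(depth(a) for a in e[1:])

def leaves(e):
    """All atomic leaves, in left-to-right order."""
    if not isinstance(e, tuple):
        return [e]
    return [x for a in e[1:] for x in leaves(a)]

print(depth(expr))   # 3
print(leaves(expr))  # [2, 'x', 2, 3, 'x', 1]
```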

\n

But symbolic expressions represented as trees aren’t the only conceivable structures. It’s also possible to have symbolic expressions where the elements are nodes on a graph, and the graph can even have loops in it. Or one can go further, and start talking, for example, about the hypergraphs that appear in our Physics Project. But the point is that brain-like systems have a hard time processing such structures. Because to keep track of what’s going on they in a sense have to keep track of multiple “threads of thought”. And that’s not something individual brain-like systems as we currently envision them can do.

\n

Many Brains Together: The Formation of Society

\n

As we’ve discussed several times here, it seems to be a key feature of brains that they create a single “thread of experience”. But what would it be like to have multiple threads? Well, we actually have a very familiar example of that: what happens when we have a whole collection of people (or other animals).

\n

One could imagine that biological evolution might have produced animals whose brains maintain multiple simultaneous threads of experience. But somehow it has ended up instead restricting each animal to just one thread of experience—and getting multiple threads by having multiple animals. (Conceivably creatures like octopuses may actually in some sense support multiple threads within one organism.)

\n

Within a single brain it seems important to always “come to a single, definite conclusion”—say to determine where an animal will “move next”. But what about in a collection of organisms? Well, there’s still some kind of coordination that will be important to the fitness of the whole population—perhaps even something as direct as moving together as a herd or flock. And in a sense, just as all those different neuron firings in one brain get collected to determine a “final conclusion for what to do”, so similarly the conclusions of many different brains have to be collected to determine a coordinated outcome.

\n

But how can a coordinated outcome arise? Well, there has to be communication of some sort between organisms. Sometimes it’s rather passive (just watch what your neighbor in a herd or flock does). Sometimes it’s something more elaborate and active—like language. But is that the best one can do? One might imagine that there could be some kind of “telepathic coordination”, in which the raw pattern of neuron firings is communicated from one brain to another. But as we’ve argued, such communication cannot be expected to be robust. To achieve robustness, one must “package up” all the internal details into some standardized form of communication (words, roars, calls, etc.) that one can expect can be “faithfully unpacked” and in effect “understood” by other, suitably similar brains.

\n

But it’s important to realize that the very possibility of such standardized communication in effect requires coordination. Because somehow what goes on in one brain has to be aligned with what goes on in another. And indeed the way that’s maintained is precisely through continual communication.

\n

So, OK, how might bigger brains affect this? One possibility is that they might enable more complex social structures. There are plenty of animals with fairly small brains that successfully form “all do the same thing” flocks, herds and the like. But the larger brains of primates seem to allow more complex “tribal” structures. Could having a bigger brain let one successfully maintain a larger social structure, in effect remembering and handling larger numbers of social connections? Or could the actual forms of these connections be more complex? While human social connections seem to be at least roughly captured by social networks represented as ordinary graphs, maybe bigger brains would for example routinely require hypergraphs.

\n

But in general we can say that language—or standardized communication of some form—is deeply connected to the existence of a “coherent society”. For without being able to exchange something like language there’s no way to align the members of a potential society. And without coherence between members something like language won’t be useful.

\n

As in so many other situations, one can expect that the detailed interactions between members of a society will show all sorts of computational irreducibility. And insofar as one can identify “the will of society” (or, for that matter, the “tide of history”), it represents a pocket of computational reducibility in the system.

\n

In human society there is a considerable tendency (though it’s often not successful) to try to maintain a single “thread of society”, in which, at some level, everyone is supposed to act more or less the same. And certainly that’s an important simplifying feature in allowing brains like ours to “navigate the social world”. Could bigger brains do something more sophisticated? As in other areas, one can imagine a whole network of regularities (or pockets of reducibility) in the structure of society, perhaps connected to a whole tower of “higher-order social abstractions”, that only brains bigger than ours can comfortably deal with. (“Just being friends” might be a story for the “small brained”. With bigger brains one might instead have patterns of dependence and connectivity that can only be represented in complicated graph theoretic ways.)

\n

Minds beyond Ours

\n

We humans have a tremendous tendency to think—or at least hope—that our minds are somehow “at the top” of what’s possible. But with what we know now about computation and how it operates in the natural world it’s pretty clear this isn’t true. And indeed it seems as if it’s precisely a limitation in the “computational architecture” of our minds—and brains—that leads to that most cherished feature of our existence that we characterize as “conscious experience”.

\n

In the natural world at large, computation is in some sense happening quite uniformly, everywhere. But our brains seem to be set up to do computation in a more directed and more limited way—taking in large amounts of sensory data, but then filtering it down to a small stream of actions to take. And, yes, one can remove this “limitation”. And while the result may lead to more computation getting done, it doesn’t lead to something that’s “a mind like ours”.

\n

And indeed in what we’ve done here, we’ve tended to be very conservative in how we imagine “extending our minds”. We’ve mostly just considered what might happen if our brains were scaled up to have more neurons, while basically maintaining the same structure. (And, yes, animals physically bigger than us already have larger brains—as did Neanderthals—but what we really need to look at is size of brain relative to size of the animal, or, in effect “amount of brain for a given amount of sensory input”.)

\n

A certain amount about what happens with different scales of brains is already fairly clear from looking at different kinds of animals, and at things like their apparent lack of human-like language. But now that we have artificial neural nets that do remarkably human-like things we’re in a position to get a more systematic sense of what different scales of “brains” can do. And indeed we’ve seen a sequence of “capability thresholds” passed as neural nets get larger.

\n

So what will bigger brains be able to do? What’s fairly straightforward is that they’ll presumably be able to take in larger amounts of sensory input, and generate larger amounts of output. (And, yes, the sensory input could come from existing modalities, or new ones, and the outputs could go to existing “actuators”, or new ones.) As a practical matter, the more “data” that has to be processed for a brain to “come to a decision” and generate an output, the slower it’ll probably be. But as brains get bigger, so presumably will the size of their working memory—as well as the number of distinct “concepts” they can “distinguish” and “remember”.

\n

If the same overall architecture is maintained, there’ll still be just a single “thread of experience”, associated with a single “thread of communication”, or a single “stream of tokens”. At the size of brains we have, we can deal with compositional language in which “concepts” (represented, basically, as words) can have at least a certain depth of qualifiers (corresponding, say, to adjectival phrases). As brain size increases, we can expect there can both be more “raw concepts”—allowing fewer qualifiers—and more working memory to deal with more deeply nested qualifiers.

\n

But is there something qualitatively different that can happen with bigger brains? Computational language (and particularly my experience with the Wolfram Language) gives some indications, the most notable of which is the idea of “going meta” and using “higher-order constructs”. Instead of, say, operating directly on “raw concepts” with (say, “verb-like”) “functions”, we can imagine higher-order functions that operate on functions themselves. And, yes, this is something of which we see powerful examples in the Wolfram Language. But it feels as if we could somehow go further—and make this more routine—if our brains in a sense had “more capacity”.

\n

To “go meta” and “use higher-order constructs” is in effect a story of abstraction—and of taking many disparate things and abstracting to the point where one can “talk about them all together”. The world at large is full of complexity—and computational irreducibility. But in essence what makes “minds like ours” possible is that there are pockets of computational reducibility to be found. And those pockets of reducibility are closely related to being able to successfully do abstraction. And as we build up towers of abstraction we are in effect navigating through networks of pockets of computational reducibility.

\n

The progress of knowledge—and the fact that we’re educated about it—lets us get to a certain level of abstraction. And, one suspects, the more capacity there is in a brain, the further it will be able to go.

\n

But where will it “want to go”? The world at large—full as it is with computational irreducibility, along with infinite numbers of pockets of reducibility—leaves infinite possibilities. And it is largely the coincidence of our particular history that defines the path we have taken.

\n

We often identify our “sense of purpose” with the path we will take. And perhaps the definiteness of our belief in purpose is related to the particular feature of brains that leads us to concentrate “everything we’re thinking” down into just a single stream of decisions and action.

\n

And, yes, as we’ve discussed, one could in principle imagine “multiway minds” with multiple “threads of consciousness” operating at once. But we humans (and individual animals in general) don’t seem to have those. Of course, in collections of humans (or other animals) there are still inevitably multiple “threads of consciousness”—and it’s things like language that “knit together” those threads to, for example, make a coherent society.

\n

Quite what that “knitting” looks like might change as we scale up the size of brains. And so, for example, with bigger brains we might be able to deal with “higher-order social structures” that would seem alien and incomprehensible to us today.

\n

So what would it be like to interact with a “bigger brain”? Inside, that brain might effectively use many more words and concepts than we know. But presumably it could generate at least a rough (“explain-like-I’m-5”) approximation that we’d be able to understand. There might well be all sorts of abstractions and “higher-order constructs” that we are basically blind to. And, yes, one is reminded of something like a dog listening to a human conversation about philosophy—and catching only the occasional “sit” or “fetch” word.

\n

As we’ve discussed several times here, if we remove our restriction to “brain-like” operation (and in particular to deriving a small stream of decisions from large amounts of sensory input) we’re thrown into the domain of general computation, where computational irreducibility is rampant, and we can’t in general expect to say much about what’s going on. But if we maintain “brain-like operation”, we’re instead in effect navigating through “networks of computational reducibility”, and we can expect to talk about things like concepts, language and towers of abstraction.

\n

From a foundational point of view, we can imagine any mind as in effect being at a particular place in the ruliad. When minds communicate, they are effectively exchanging the rulial analog of particles—robust concepts that are somehow unchanged as they propagate within the ruliad. So what would happen if we had bigger brains? In a sense it’s a surprisingly “mechanical” story: a bigger brain—encompassing more concepts, etc.—in effect just occupies a larger region of rulial space. And the presence of abstraction—perhaps learned from a whole arc of intellectual history—can lead to more expansion in rulial space.

\n

And in the end it seems that “minds beyond ours” can be characterized by how large the regions of the ruliad they occupy are. (Such minds are, in some very literal rulial sense, more “broad minded”.) So what is the limit of all this? Ultimately, it’s a “mind” that spans the whole ruliad, and in effect incorporates all possible computations. But in some fundamental sense this is not a mind like ours, not least because by “being everything” it “becomes nothing”—and one can no longer identify it as having a coherent “thread of individual existence”.

\n

And, yes, the overall thrust of what we’ve been saying applies just as well to “AI minds” as to biological ones. If we remove restrictions like being set up to generate the next token, we’ll be left with a neural net that’s just “doing computation”, with no obvious “mind-like purpose” in sight. But if we make neural nets do typical “brain-like” tasks, then we can expect that they too will find and navigate pockets of reducibility. We may well not recognize what they’re doing. But insofar as we can, then inevitably we’ll mostly be sampling the parts of “minds beyond ours” that are aligned with “minds like ours”. And it’ll take progress in our whole human intellectual edifice to be able to fully appreciate what it is that minds beyond ours can do.

\n

Thanks for recent discussions about topics covered here in particular to Richard Assar, Joscha Bach, Kovas Boguta, Thomas Dullien, Dugan Hammock, Christopher Lord, Fred Meinberg, Nora Popescu, Philip Rosedale, Terry Sejnowski, Hikari Sorensen, and James Wiles.

\n", + "category": "Artificial Intelligence", + "link": "https://writings.stephenwolfram.com/2025/05/what-if-we-had-bigger-brains-imagining-minds-beyond-ours/", "creator": "Stephen Wolfram", - "pubDate": "Tue, 09 Jan 2024 22:33:01 +0000", - "enclosure": "https://content.wolfram.com/sites/43/2024/01/stream-plot-small.mp4", - "enclosureType": "video/mp4", - "image": "https://content.wolfram.com/sites/43/2024/01/stream-plot-small.mp4", + "pubDate": "Wed, 21 May 2025 14:28:31 +0000", + "enclosure": "", + "enclosureType": "", + "image": "", "id": "", "language": "en", "folder": "", @@ -84,7 +84,249 @@ "favorite": false, "created": false, "tags": [], - "hash": "8e9ed31ddb65ef517482505f1b29daef", + "hash": "2841357beeb72f8b939e88b179422b99", + "highlights": [] + }, + { + "title": "What Can We Learn about Engineering and Innovation from Half a Century of the Game of Life Cellular Automaton?", + "description": "\"\"Metaengineering and Laws of Innovation Things are invented. Things are discovered. And somehow there’s an arc of progress that’s formed. But are there what amount to “laws of innovation” that govern that arc of progress? There are some exponential and other laws that purport to at least measure overall quantitative aspects of progress (number of […]", + "content": "\"\"

\"What

\n

Metaengineering and Laws of Innovation

\n

Things are invented. Things are discovered. And somehow there’s an arc of progress that’s formed. But are there what amount to “laws of innovation” that govern that arc of progress?

\n

There are some exponential and other laws that purport to at least measure overall quantitative aspects of progress (number of transistors on a chip; number of papers published in a year; etc.). But what about all the disparate innovations that make up the arc of progress? Do we have a systematic way to study those?

\n

We can look at the plans for different kinds of bicycles or rockets or microprocessors. And over the course of years we’ll see the results of successive innovations. But most of the time those innovations won’t stay within one particular domain—say shapes of bicycle frames. Rather they’ll keep on pulling in innovations from other domains—say, new materials or new manufacturing techniques. But if we want to get closer to the study of the pure phenomenon of innovation we need a case where—preferably over a long period of time—everything that happens can be described in a uniform way within a single narrowly defined framework.

\n

Well, some time ago I realized that, actually, yes, there is such a case—and I’ve even personally been following it for about half a century. It’s the effort to build “engineering” structures within the Game of Life cellular automaton. They might serve as clocks, wires, logic gates, or things that generate digits of π. But the point is that they’re all just patterns of bits. So when we talk about innovation in this case, we’re talking about the rather pure question of how patterns of bits get invented, or discovered.

\n

As a long-time serious researcher of the science of cellular automata (and of what they generically do), I must say I’ve long been frustrated by how specific, whimsical and “non-scientific” the things people do with the Game of Life have often seemed to me to be. But what I now realize is that all that detail and all that hard work have now created what amounts to a unique dataset of engineering innovation. And my goal here is to do what one can call “metaengineering”—and to study in effect what happened in that process of engineering over the nearly six decades since the Game of Life was invented.

\n

We’ll see in rather pure form many phenomena that are at least anecdotally familiar from our overall experience of progress and innovation. Most of the time, the first step is to identify an objective: some purpose one can describe and wants to achieve. (Much more rarely, one instead observes something that happens, then realizes there’s a way one can meaningfully make use of it.) But starting from an objective, one either takes components one has, and puts human effort into arranging them to “invent” something that will achieve the objective—or in effect (usually at least somewhat systematically, and automatically) one searches to try to “discover” new ways to achieve the objective.

\n

As we explore what’s been done with the Game of Life we’ll see occasional sudden advances—together with much larger amounts of incremental progress. We’ll see towers of technology being built, and we’ll see old, rather simple technology being used to achieve new objectives. But most of all, we’ll see an interplay between what gets discovered by searching possibilities—and what gets invented by explicit human effort.

\n

The Principle of Computational Equivalence implies that there is, in a sense, infinite richness to what a computational system like the Game of Life can ultimately do—and it’s the role of science to explore this richness in all its breadth. But when it comes to engineering and technology the crucial question is what we choose to make the system do—and what paths we follow to get there. Inevitably, some of this is determined by the underlying computational structure of the system. But much of it is a reflection of how we, as humans, do things, and the patterns of choices we make. And that’s what we’ll be able to study—at quite large scale—by looking at the nearly six decades of work on the Game of Life.

\n

How similar are the results of such “purposeful engineering” to the results of “blind” adaptive evolution of the kind that occurs in biology? I recently explored adaptive evolution (as it happens, using cellular automata as a model) and saw that it can routinely deliver what seem like “sequences of new ideas”. But now in the example of the Game of Life we have what we can explicitly identify as “sequences of new ideas”. And so we’re in a position to compare the results of human effort (aided, in many cases, by systematic search) with what we can “automatically” do by the algorithmic process of adaptive evolution.

\n

In the end, we can think of the set of things that we can in principle engineer as being laid out in a kind of “metaengineering space”, much as we can think of mathematical theorems we can prove as being laid out in metamathematical space. In the mathematical case (notwithstanding some of my own work) the vast majority of theorems have historically been found purely by human effort. But, as we’ll see below, in Game-of-Life engineering it’s been a mixture of human effort and fairly automated exploration of metaengineering space. Though—much like in traditional mathematics—we’re still in a sense only pursuing objectives we’ve already conceptualized. And in this way what we’re doing is very different from what I’ve done for so long in studying the science (or, as I would now say, the ruliology) of what computational systems like cellular automata (of which the Game of Life is an example) do “in the wild”, when they’re unconstrained by objectives we’re trying to achieve with them.

\n

The Nature of the Game of Life

\n

Here’s a typical example of what it looks like to run the Game of Life:

\n
\n
\n

\n

There’s a lot of complicated—and hard to understand—stuff going on here. But there are still some recognizable structures—like the “blinkers” that alternate on successive steps

\n
\n
\n

\n

and the “gliders” that steadily move across the screen:

\n
\n
\n

\n

Seeing these structures might make one think that one should be able to “do engineering” in the Game of Life, setting up patterns that can ultimately do all sorts of things. And indeed our main subject here is the actual development of such engineering over the past nearly six decades since the introduction of the Game of Life.

\n

What we’ll be concentrating on is essentially the “technology” of the Game of Life: how we take the “raw material” that the Game of Life provides, and make from it “meaningful engineering structures”.
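That “raw material” is just the Life update rule: a cell is alive on the next step if it has exactly three live neighbors, or if it is already alive and has exactly two. Here’s a minimal sketch in Python (using a sparse set-of-live-cells representation):

```python
from collections import Counter

def life_step(cells):
    """One step of the Game of Life on a set of live (row, col) cells:
    a cell is live next step if it has exactly 3 live neighbors, or is
    currently live with exactly 2."""
    counts = Counter((r + dr, c + dc) for (r, c) in cells
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    return {p for p, n in counts.items() if n == 3 or (n == 2 and p in cells)}

# The R-pentomino: a famous 5-cell seed whose evolution stays complex
# for over a thousand steps before stabilizing.
r_pentomino = {(0, 1), (0, 2), (1, 0), (1, 1), (2, 1)}
print(len(life_step(r_pentomino)))  # 6 live cells after one step
```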

\n

But what about the science of the Game of Life? What can we say about what the Game of Life “naturally does”, independent of “useful” structures we create in it? The vast majority of the effort that’s been put into the Game of Life over the past half century hasn’t been about this. But this type of fundamental question is central to what one asks in what I now call ruliology, a kind of science that I’ve been energetically pursuing since the early 1980s.

\n

Ruliology looks in general at classes of systems, rather than at the kind of specifics that have typically been explored in the Game of Life. And within ruliology, the Game of Life is in a sense nothing special; it’s just one of many “class 4” 2D cellular automata (in my numbering scheme, it’s the 2-color 9-neighbor cellular automaton with outer totalistic code 224).
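That code number can be checked directly. In the outer totalistic coding for 2-color rules, each pair (cell state s, live-neighbor total t) contributes new-state * 2^(2t+s) to the code, and summing over the Life rule does indeed give 224 (a small Python check of my understanding of the convention):

```python
def life_rule(s, t):
    """New cell state in the Game of Life, given current state s (0 or 1)
    and total t of live cells among the 8 outer neighbors."""
    return 1 if t == 3 or (s == 1 and t == 2) else 0

# Outer totalistic coding: each (s, t) pair contributes
# life_rule(s, t) * 2**(2*t + s) to the rule's code number.
code = sum(life_rule(s, t) << (2 * t + s) for t in range(9) for s in range(2))
print(code)  # 224
```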

\n

My own investigations of cellular automata have particularly focused on 1D rather than 2D examples. And I think that’s been crucial to many of the scientific discoveries I’ve made. Because somehow one learns so much more by being able to see at a glance the history of a system, rather than just seeing frames in a video go by. With a class 4 2D rule like the Game of Life, one can begin to approach this by including “trails” of what’s previously happened, and we’ll often use this kind of visualization in what follows:

\n
\n
\n

\n

We can get a more complete view of history by looking at the whole (2+1)-dimensional “spacetime history”—though then we’re confronted with 3D forms that are often somewhat difficult for our human visual system to parse:

\n
\n
\n

\n

But taking a slice through this 3D form we get “silhouette” pictures that turn out to look remarkably similar to what I generated in large quantities starting in the early 1980s across many 1D cellular automata:

\n
\n
\n

\n

Such pictures—with their complex forms—highlight the computational irreducibility that’s close at hand even in the Game of Life. And indeed it’s the presence of such computational irreducibility that ultimately makes possible the richness of engineering that can be done in the Game of Life. But in actually doing that engineering—and in setting up structures and processes that behave in understandable and “technologically useful” ways—we need to keep the computational irreducibility “bottled up”. And in the end, we can think of the path of engineering innovation in the Game of Life as like an effort to navigate through an ocean of computational irreducibility, finding “islands of reducibility” that achieve the purposes we want.

\n

What’s Been Made in the Game of Life?

\n

Most of the structures of “engineering interest” in the Game of Life are somehow persistent. The simplest are structures that just remain constant, some small examples being:

\n
\n
\n

\n

And, yes, structures in the Game of Life have been given all sorts of (usually whimsical) names, which I’ll use here. (And, in that vein, structures in the Game of Life that remain constant are normally called “still lifes”.)

\n

Beyond structures that just remain constant, there are “oscillators” that produce periodic patterns:

\n
\n
\n

\n

We’ll be discussing oscillators at much greater length below, but here are a few examples (where now we’re including a visualization that shows “trails”):

\n
\n
\n

\n
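Periods like these can be checked mechanically: step a pattern until it first returns to its initial configuration. A small Python sketch (with a sparse set-of-live-cells representation of the Life rule):

```python
from collections import Counter

def life_step(cells):
    """One Life step on a set of live (row, col) cells."""
    counts = Counter((r + dr, c + dc) for (r, c) in cells
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    return {p for p, n in counts.items() if n == 3 or (n == 2 and p in cells)}

def period(cells, max_steps=100):
    """Steps until the pattern first returns to its initial state
    (so still lifes have period 1), or None within max_steps."""
    state = cells
    for t in range(1, max_steps + 1):
        state = life_step(state)
        if state == cells:
            return t
    return None

blinker = {(0, 0), (0, 1), (0, 2)}   # the period-2 oscillator
block = {(0, 0), (0, 1), (1, 0), (1, 1)}  # a still life
print(period(blinker), period(block))  # 2 1
```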

Next in our inventory of classes of structures come “gliders” (or in general “spaceships”): structures that repeat periodically but move when they do so. A classic example is the basic glider, which takes on the same form every 4 steps—after moving 1 cell horizontally and 1 cell vertically:

\n
\n
\n

\n

Here are a few small examples of such “spaceship”-style structures:

\n
\n
\n

\n
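The defining property of the basic glider (the same form after 4 steps, displaced by one cell diagonally) can also be verified directly; here is a Python sketch with cells as a set of (row, col) pairs:

```python
from collections import Counter

def life_step(cells):
    """One Life step on a set of live (row, col) cells."""
    counts = Counter((r + dr, c + dc) for (r, c) in cells
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    return {p for p, n in counts.items() if n == 3 or (n == 2 and p in cells)}

# The standard glider; after 4 steps it reappears shifted by (1, 1).
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
cells = glider
for _ in range(4):
    cells = life_step(cells)
print(cells == {(r + 1, c + 1) for (r, c) in glider})  # True
```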

Still lifes, oscillators and spaceships are most of what one sees in the “ash” that survives from typical random initial conditions. And for example the end result (after 1103 steps) from the evolution we saw in the previous section consists of:

\n
\n
\n

\n

The structures we’ve seen so far were all found not long after the Game of Life was invented; indeed, pretty much as soon as it was simulated on a computer. But one feature that they all share is that they don’t systematically grow; they always return to the same number of black cells. And so one of the early surprises (in 1970) was the discovery of a “glider gun” that shoots out a glider every 30 steps forever:

\n
\n
\n

\n
\n
\n

\n

Something that gives a sense of progress that’s been made in Game-of-Life “technology” is that a “more efficient” glider gun—with period 15—was discovered, but only in 2024, 54 years after the previous one:

\n
\n
\n

\n

Another kind of structure that was quickly discovered in the early history of the Game of Life is a “puffer”—a “spaceship” that “leaves debris behind” (in this case every 128 steps):

\n
\n
\n

\n
\n
\n

\n

But given these kinds of “components”, what can one build? Something constructed very early was the “breeder”, which uses streams of gliders to create glider guns, which themselves then generate streams of gliders:

\n
\n
\n

\n
\n
\n

\n
\n
\n

\n

The original pattern covers about a quarter million cells (with 4060 being black). Running it for 1000 steps we see it builds up a triangle containing a quadratically increasing number of gliders:

\n
\n
\n

\n

OK, but knowing that it’s in principle possible to “fill a growing region of space”, is there a more efficient way to do it? The surprisingly simple answer, as discovered in 1993, is yes:

\n
\n
\n

\n
\n
\n

\n

So what other kinds of things can be built in the Game of Life? Lots—even from the simple structures we’ve seen so far. For example, here’s a pattern that was constructed to compute the primes

\n
\n
\n

\n

emitting a “lightweight spaceship” at step 100 + 120n only if n is prime. It’s a little more obvious how this works when it’s viewed “in spacetime”; in effect it’s running a sieve in which all multiples of all numbers are instantiated as streams of gliders, which knock out spaceships generated at non-prime positions:

\n
\n
\n

\n
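The sieve logic here is easy to state in miniature. A hypothetical Python analog (not the actual Life construction): imagine a spaceship emitted at every position n >= 2, with a stream for each m knocking out all proper multiples of m; what survives are exactly the primes.

```python
def stream_sieve(limit):
    """Sketch of the sieve: 'spaceships' start at every position >= 2,
    and a 'glider stream' for each m >= 2 knocks out positions
    2*m, 3*m, ... -- leaving spaceships only at the primes."""
    emitted = set(range(2, limit + 1))
    for m in range(2, limit + 1):
        for multiple in range(2 * m, limit + 1, m):
            emitted.discard(multiple)
    return sorted(emitted)

print(stream_sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```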

If we look at the original pattern here, it’s just made up of a collection of rather simple structures:

\n
\n
\n

\n

And indeed structures like these have been used to build all sorts of things, including for example Turing machine emulators—and also an emulator for the Game of Life itself, with this 499×499 pattern corresponding to a single emulated Life cell:

\n
\n
\n

\n

Both these last two patterns were constructed in the 1990s—from components that had been known since the early 1970s. And—as we can see—they’re large (and complicated). But do they need to be so large? One of the lessons of the Principle of Computational Equivalence is that in the computational universe there’s almost always a way to “do just as much, but with much less”. And indeed in the Game of Life many, many discoveries along these lines have been made in the past few decades.

\n

As we’ll see, often (but not always) these discoveries built on “new devices” and “new mechanisms” that were identified in the intervening years. A long series of such “devices” and “mechanisms” involved handling “signals” associated with streams of gliders. For example, the “glider pusher” (from 1993) has the somewhat subtle (but useful) effect of “pushing” a glider by one cell when it goes past:

\n
\n
\n

\n

Another example (actually already known in 1971, and based on the period-15 “pentadecathlon” oscillator) is a glider reflector:

\n
\n
\n

\n

But a feature of this glider pusher and glider reflector is that they work only when both the glider and the stationary object are in a particular phase with respect to their periods. And this makes it very tricky to build larger structures out of these that operate correctly (and in many cases it wouldn’t be possible but for the commensurability of the period 30 of the original glider gun, and the period 15 of the glider reflector).

\n

Could glider pushing and glider reflection be done more robustly? The answer turns out to be yes, though it wasn’t until 2020 that the “bandersnatch” was created—a completely static structure that “pushes” gliders independent of their phase:

\n
\n
\n

\n

Meanwhile, in 2013 the “snark” had been created—which served as a phase-independent glider reflector:

\n
\n
\n

\n

One theme—to which we’ll return later—is that after certain functionality was first built in the Game of Life, there followed many “optimizations”, achieving that functionality more robustly, with smaller patterns, etc. An important methodology has revolved around so-called “hasslers”, which in effect allow one to “mine” small pieces of computational irreducibility, by providing “harnesses” that “rein in” behavior, typically returning patterns to their original states after they’ve done what one wants them to do.


So, for example, here’s a hassler (found, as it happens, just on February 8, 2025!) that “harnesses” the first pattern we looked at above (that didn’t stabilize for 1103 steps) into an oscillator with period 80:


And based on this (indeed, later that same day) the most-compact-ever “spaceship gun” was constructed from this:


The Arc of Progress


We’ve talked about some of what it’s been possible to build in the Game of Life over the years. Now I want to talk about how that happened, or, in other words, the “arc of progress” in the Game of Life. And as a first indication of this, we can plot the number of new Life structures that have been identified each year (or, more specifically, the number of structures deemed significant enough to name, and to record in the LifeWiki database or its predecessors):


There’s an immediate impression of several waves of activity. And we can break this down into activity around various common categories of structures:


For oscillators we see fairly continuous activity for five decades, but with rapid acceleration recently. For “spaceships” and “guns” we see a long dry spell from the early 1970s to the 1990s, followed by fairly consistent activity since. And for conduits and reflectors we see almost nothing until sudden peaks of activity, in the mid-1990s and mid-2010s respectively.


But what was actually done to find all these structures? There have basically been two methods: construction and search. Construction is a story of “explicit engineering”—and of using human thought to build up what one wants. Search, on the other hand, is a story of automation—and of taking algorithmically generated (usually large) collections of possible patterns, and testing them to find ones that do what one wants. Particularly in more recent times it’s also become common to interleave these methods, for example using construction to build a framework, and then using search to find specific patterns that implement some feature of that framework.


When one uses construction, it’s like “inventing” a structure, and when one uses search, it’s like “discovering” it. So how much of each is being done in practice? Text mining the descriptions of recently recorded structures, the result is as follows—suggesting that, at least in recent times, search (i.e. “discovery”) has become the dominant methodology for finding new structures:


When the Game of Life was being invented, it wasn’t long before it was being run on computers—and people were trying to classify the things it could do. Still lifes and simple oscillators showed up immediately. And then—evolving from the “R pentomino” initial condition that we used at the beginning here—after 69 steps something unexpected showed up. In among complicated behavior that was hard to describe was a simple free-standing structure that just systematically moved—a “glider”:
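The glider’s defining behavior is that it reappears translated by one cell diagonally every 4 steps. Here’s a minimal sketch verifying this (the set-of-live-cells representation and the (row, column) coordinates are my own convention, not anything specific to how Life is usually implemented):

```python
from collections import Counter

def step(cells):
    """One Game-of-Life step on a set of live (row, col) cells."""
    counts = Counter((r + dr, c + dc) for (r, c) in cells
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # A cell is live next step if it has exactly 3 live neighbors,
    # or 2 live neighbors and is currently live itself.
    return {p for p, n in counts.items() if n == 3 or (n == 2 and p in cells)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
pattern = glider
for _ in range(4):
    pattern = step(pattern)

# After 4 steps the glider reappears, shifted one cell diagonally
assert pattern == {(r + 1, c + 1) for (r, c) in glider}
```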


Some other moving structures (dubbed “spaceships”) were also observed. But the question arose: could there be a structure that would somehow systematically grow forever? To find one involved a mixture of “discovery” and “invention”. In running from the “R pentomino” initial condition lots of things happen. But at step 785 it was noticed that there appeared the following structure:


For a while this structure (dubbed the “queen bee”) behaves in a fairly orderly way—producing two stable “beehive” structures (visible here as vertical columns). But then it “decays” into more complicated behavior:


But could this “discovered” behavior be “stabilized”? The answer was that, yes, if a “queen bee” was combined with two “blocks” it would just repeatedly “shuttle” back and forth:


What about two “queen bees”? Now whenever these collided there was a side effect: a glider was generated—with the result that the whole structure became a glider gun repeatedly producing gliders forever:


The glider gun was the first major example of a structure in the Game of Life that was found—at least in part—by construction. And within a year of it being found in November 1970, two more guns—with very similar methods of operation—had been found:


But then the well ran dry—and no further gun was found until 1990. Pretty much the same thing happened with spaceships: four were found in 1970, but no more were found until 1989. As we’ll discuss later, it was in a sense a quintessential story of computational irreducibility: there was no way to predict (or “construct”) what spaceships would exist; one just had to do the computation (i.e. search) to find out.


It was, however, easier to have incremental success with oscillators—and (as we’ll see) pretty much every year an oscillator with some new period was found, essentially always by search. Some periods were “long holdouts” (for example the first period-19 oscillator was found only in 2023), once again reflecting the effects of computational irreducibility.


Glider guns provided a source of “signals” for Life engineering. But what could one do with these signals? An important idea—that first showed up in the “breeder” in 1971—was “glider synthesis”: the concept that combinations of gliders could produce other structures. So, for example, it was found that three carefully-arranged gliders could generate a period-15 (“pentadecathlon”) oscillator:


It was also soon found that 8 gliders could make the original glider gun (the breeder made glider guns by a slightly more ornate method). And eventually there developed the conjecture that any structure that could be synthesized from gliders would need at most 15 gliders, carefully arranged at positions whose values effectively encoded the object to be constructed.


By the end of the 1970s a group of committed Life enthusiasts remained, but there was something of a feeling that “the low-hanging fruit had been picked”, and it wasn’t clear where to go next. But after a somewhat slow decade, work on the Game of Life picked up substantially towards the end of the 1980s. Perhaps my own work on cellular automata (and particularly the identification of class 4 cellular automata, of which the Game of Life is a 2D example) had something to do with it. And no doubt it also helped that the fairly widespread availability of faster (“workstation class”) computers now made it possible for more people to do large-scale systematic searches. In addition, when the web arrived in the early 1990s it let people much more readily share results—and had the effect of greatly expanding and organizing the community of Life enthusiasts.


In the 1990s—along with more powerful searches that found new spaceships and guns—there was a burst of activity in constructing elaborate “machines” out of existing known structures. The idea was to start from a known type of “machine” (say a Turing machine), then to construct a Life implementation of it. The constructions were made particularly ornate by the need to make the phases of gliders, guns, etc. appropriately correspond. Needless to say, any Life configuration can be thought of as doing some computation. But the “machines” that were constructed were ones whose “purpose” and “functionality” was already well established in general computation, independent of the Game of Life.


If the 1990s saw a push towards “construction” in the Game of Life, the first decade of the 2000s saw a great expansion of search. Increasingly powerful cloud and distributed computing allowed “censuses” to be created of structures emerging from billions, then trillions of initial conditions. Mostly what was emphasized was finding new instances of existing categories of objects, like oscillators and spaceships. There were particular challenges, like (as we’ll discuss below) finding oscillators of any period (finally completely solved in 2023), or finding spaceships with different patterns of motion. Searches did yield what in censuses were usually called “objects with unusual growth”, but mostly these were not viewed as being of “engineering utility”, and so were not extensively studied (even though from the point of view of the “science of the Game of Life” they are, for example, perhaps the most revealing examples of computational irreducibility).


As had happened throughout the history of the Game of Life, some of the most notable new structures were created (sometimes over a long period of time) by a mixture of construction and search. For example, the “stably-reflect-gliders-without-regard-to-phase” snark—finally obtained in 2013—was the result of using parts of the (ultimately unstable) “simple-structures” construction from around 1998


and combining them with a hard-to-explain-why-it-works “still life” found by search:


Another example was the “Sir Robin knightship”—a spaceship that moves like a chess knight 2 cells down and 1 across. In 2017 a spaceship search found a structure that in 6 steps has many elements that make a knight move—but then subsequently “falls apart”:


But the next year a carefully orchestrated search was able to “find a tail” that “adds a fix” to this—and successfully produces a final “perfect knightship”:


By the way, the idea that one can take something that “almost works” and find a way to “fix it” is one that’s appeared repeatedly in the engineering history of the Game of Life. At the outset, it’s far from obvious that such a strategy would be viable. But the fact that it is seems to be similar to the story of why both biological evolution and machine learning are viable—which, as I’ve recently discussed, can be viewed as yet another consequence of the phenomenon of computational irreducibility.


One thing that’s happened many times in the history of the Game of Life is that at some point some category of structure—like a conduit—is identified, and named. But then it’s realized that something found much earlier could be seen as an instance of the same category of structure, though, without the clarity of the later instance, its significance wasn’t recognized at the time. For example, in 1995 the “Herschel conduit” that moves a Herschel from one position to another (here in 64 steps) was discovered (by a search):


But then it was realized that—if looked at correctly—a similar phenomenon had actually already been seen in 1972, in the form of a structure that in effect takes a certain small pattern if it is present, and “moves it” (in 28 steps) to a corresponding pattern at a different position (albeit with a certain amount of “containable” other activity):


Looking at the plots above of the number of new structures found per year we see the largest peak after 2020. And, yes, it seems that during the pandemic people spent more time on the Game of Life—in particular trying to fill in tables of structures of particular types, for example, with each possible period.


But what about the human side of engineering in the Game of Life? The activity brought in people from many different backgrounds. And particularly in earlier years, they often operated quite independently, and with very different methods (some not even using a computer). But if we consider all “recorded structures” we can look at how many structures in total different people contributed, and when they made these contributions:


Needless to say—given that we’re dealing with an almost-60-year span—different people tend to show up as active in different periods. Looking at everyone, there’s a roughly exponential distribution to the number of (named) structures they’ve contributed. (Though note that several of the top contributors shown here found parametrized collections of structures and then recorded many instances.)


The Example of Oscillators


As a first example of systematic “innovation history” in the Game of Life let’s talk about oscillators. Here are the periods of oscillators that were found up to 1980:


As of 1980, many periods were missing. But in fact all periods are possible—though it wasn’t until 2023 that they were all filled in:


And if we plot the number of distinct periods (say below 60) found by a given year, we can get a first sense of the “arc of progress” in “oscillator technology” in the Game of Life:


Finding an oscillator of a given period is one thing. But how about the smallest oscillator of that period? We can be fairly certain that not all of these are known, even for periods below 30. But here’s a plot that shows when the progressive “smallest so far” oscillators were found for a given period (red indicates the first instance of a given period; blue the best result to date):


And here’s the corresponding plot for all periods up to 100:


But what about the actual reduction in size that’s achieved? Here’s a plot for each oscillator period showing the sequence of sizes found—in effect the “arc of engineering optimization” that’s achieved for that period:


So what are the actual patterns associated with these various oscillators? Here are some results (including timelines of when the patterns were found):


But how were these all found? The period-2 “blinker” was very obvious—showing up in evolution from almost any random initial condition. Some other oscillators were also easily found by looking at the evolution of particular, simple initial conditions. For example, a line of 10 black cells after 3 steps gives the period-15 “pentadecathlon”. Similarly, the period-3 “pulsar” emerges from a pair of length-5 blocks after 22 steps:
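The pentadecathlon claim is easy to check with a minimal Life evolver (the set-of-live-cells representation is my own convention): evolve the 1×10 line until a state recurs exactly, and read off the cycle length. Since the pentadecathlon doesn’t move, exact recurrence of coordinates is enough:

```python
from collections import Counter

def step(cells):
    """One Game-of-Life step on a set of live (row, col) cells."""
    counts = Counter((r + dr, c + dc) for (r, c) in cells
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    return frozenset(p for p, n in counts.items()
                     if n == 3 or (n == 2 and p in cells))

# Start from a 1x10 line of live cells
state = frozenset((0, c) for c in range(10))
seen = {state: 0}
period = None
for t in range(1, 100):
    state = step(state)
    if state in seen:
        period = t - seen[state]
        break
    seen[state] = t

assert period == 15  # the line settles into the period-15 pentadecathlon
```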


Many early oscillators were found by iterative experimentation, often starting with stable “still life” configurations, then perturbing them slightly, as in this period-4 case:


Another common strategy for finding oscillators (that we’ll discuss more below) was to take an “unstable” configuration, then to “stabilize” it by putting “robust” still lifes such as the “block” or the “eater” around it—yielding results like:


For periods that can be formed as LCMs of smaller periods one “construction-oriented” strategy has been to take oscillators with appropriate smaller periods, and combine them, as in:
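The arithmetic behind this strategy is just the least common multiple: side-by-side non-interacting oscillators of periods a and b repeat together every lcm(a, b) steps. A quick sketch finds which pairs of smaller periods would combine to a given target period (the choice of target 60 here is just illustrative):

```python
import math

# Pairs of smaller oscillator periods whose combination has period 60
target = 60
pairs = [(a, b) for a in range(2, target) for b in range(a, target)
         if math.lcm(a, b) == target]

assert (4, 15) in pairs   # a period-4 and a period-15 oscillator suffice
assert (5, 12) in pairs
```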


In general, many different strategies have been used, as indicated for example by the sequence of period-3 oscillators that have been recorded over the years (where “smallest-so-far” cases are highlighted):


By the mid-1990s oscillators of many periods had been found. But there were still holdouts, like period 19 and for example pretty much all periods between 61 and 70 (except, as it happens, 66). At the time, though, all sorts of complicated constructions—say of prime generators—were nevertheless being done. And in 1996 it was figured out that one could in effect always “build a machine” (using only structures that had already been found two decades earlier) that would serve as an oscillator of any (sufficiently large) period (here 67)—effectively by “sending a signal around a loop of appropriate size”:


But by the 2010s, with large numbers of fast computers becoming available, there was again an emphasis on pure random search. A handful of highly efficient programs were developed that could be run on anyone’s machine. In a typical case, a search might consist of starting, say, from a trillion randomly chosen initial conditions (or “soups”), identifying new structures that emerge, then seeing whether these act, for example, as oscillators. Typically any new discovery was immediately reported in online forums—leading to variations of it being tried, and new follow-on results often being reported within hours or days.
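In outline, such a soup search can be sketched as follows. This is a toy version under my own conventions (helper names, soup size, density, and step limit are all mine): real search programs are heavily optimized and separately detect escaping gliders, which this naive exact-recurrence check would miss:

```python
import random
from collections import Counter

def step(cells):
    """One Game-of-Life step on a set of live (row, col) cells."""
    counts = Counter((r + dr, c + dc) for (r, c) in cells
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    return frozenset(p for p, n in counts.items()
                     if n == 3 or (n == 2 and p in cells))

def final_period(cells, max_steps=300):
    """Evolve until a state recurs exactly; return the cycle length
    (None if nothing recurs, e.g. because gliders keep escaping)."""
    state = frozenset(cells)
    seen = {state: 0}
    for t in range(1, max_steps + 1):
        state = step(state)
        if state in seen:
            return t - seen[state]
        seen[state] = t
    return None

# The search loop: random 16x16 "soups", tallying the periods that emerge
rng = random.Random(0)
periods = Counter()
for _ in range(10):
    soup = {(r, c) for r in range(16) for c in range(16)
            if rng.random() < 0.5}
    p = final_period(soup)
    if p is not None:
        periods[p] += 1
```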


Many of the random searches started just from 16×16 regions of randomly chosen cells (or larger regions with symmetries imposed). And in a typical manifestation of computational irreducibility, many surprisingly small and “random-looking” (at least up to symmetries) results were found. So, for example, here’s the sequence of recorded period-16 oscillators with smaller-than-before cases highlighted:


Up through the 1990s results were typically found by a mixture of construction and small-scale search. But in 2016, results from large-scale random searches (sometimes symmetrical, sometimes not) started to appear.


The contrast between construction and search could be dramatic, like here for period 57:


One might wonder whether there could actually be a systematic, purely algorithmic way to find, say, possible oscillators of a given period. And indeed for one-dimensional cellular automata (as I noted in 1984), it turns out that there is. Say one considers blocks of cells of width w. Which block can follow which other is determined by a de Bruijn graph, or equivalently, a finite state machine. If one is going to have a pattern with period p, all blocks that appear in it must also be periodic with period p. But such blocks just form a subgraph of the overall de Bruijn graph, or equivalently, form another, smaller, finite state machine. And then all patterns with period p must correspond to paths through this subgraph. But how long are the blocks one has to consider?
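For the simplest case, period 1 (“still lifes”), the construction can be made concrete for an elementary rule such as rule 90, where each cell becomes the XOR of its two neighbors. A sketch, with conventions of my own choosing: nodes are overlapping 2-blocks, and an edge (a, b) → (b, c) is kept only if the rule maps the 3-block (a, b, c) back to its center:

```python
def rule90(a, b, c):
    # Rule 90: a cell's next value is the XOR of its two neighbors
    return a ^ c

# Keep the edge (a, b) -> (b, c) only if the center cell b is unchanged
edges = {(a, b): [(b, c) for c in (0, 1) if rule90(a, b, c) == b]
         for a in (0, 1) for b in (0, 1)}

# Walk from (1, 1); in rule 90 each node has exactly one successor,
# and this particular walk lies on a pure cycle (no tail to trim)
node, cycle = (1, 1), []
while node not in cycle:
    cycle.append(node)
    node = edges[node][0]
pattern = [a for (a, b) in cycle]   # spatial unit cell of a still life

# Verify on a ring twice the cycle length: one step leaves it unchanged
ring = pattern * 2
n = len(ring)
assert [ring[(i - 1) % n] ^ ring[(i + 1) % n] for i in range(n)] == ring
```

Any cycle in the constrained subgraph corresponds to a spatially periodic pattern that the rule leaves fixed; here the walk yields the unit cell [1, 1, 0].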


In 1D cellular automata, it turns out that there’s an upper bound of 2^(2p). But for 2D cellular automata—like the Game of Life—there is in general no such upper bound, a fact related to the undecidability of the 2D tiling problem. And the result is that there’s no complete, systematic algorithm to find oscillators in a general 2D cellular automaton, or presumably in the Game of Life.


But—as was actually already realized in the mid-1990s—it’s still possible to use algorithmic methods to “fill in” pieces of patterns. The idea is to define part of a pattern of a given period, then use this as a constraint on filling in the rest of it, finding “solutions” that satisfy the constraint using SAT-solving techniques. In practice, this approach has more often been used for spaceships than for oscillators (not least because it’s only practical for small periods). But one feature of it is that it can generate fairly large patterns with a given period.


Yet another method that’s been tried has been to generate oscillators by colliding gliders in many possible ways. But while this is definitely useful if one’s interested in what can be made using gliders, it doesn’t seem to have, for example, allowed people to find much in the way of interesting new oscillators.


Modularity


In traditional engineering a key strategy is modularity. Rather than trying to build something “all in one go”, the idea is to build a collection of independent subsystems, from which the whole system can then be assembled. But how does this work in the Game of Life? We might imagine that to identify the modular parts of a system, we’d have to know the “process” by which the system was put together, and the “intent” involved. But because in the Game of Life we’re ultimately just dealing with pure patterns of bits we can in effect just as well “come in at the end” and algorithmically figure out what pieces are operating as separate, modular parts.


So how can we do this? Basically what we want to find out is which parts of a pattern “operate independently” at a given step, in the sense that these parts don’t have any overlap in the cells they affect. Given that in the rules for the Game of Life a particular cell can affect any of the 9 cells in its neighborhood, we can say that black cells can only have “overlapping effects” if they are at most 2 cell units apart. So then we can draw a “nearest neighbor graph” that shows which cells are connected in this sense:
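A minimal sketch of this decomposition (the helper name is mine): treat live cells within Chebyshev distance 2 of each other as connected, and take connected components:

```python
def modular_parts(cells):
    """Connected components of live cells, linking any two cells
    within Chebyshev distance 2 (i.e. whose 3x3 neighborhoods overlap)."""
    cells, parts = set(cells), []
    while cells:
        frontier, part = [cells.pop()], set()
        while frontier:
            r, c = frontier.pop()
            part.add((r, c))
            # Grab all not-yet-visited live cells within distance 2
            near = {(r + dr, c + dc) for dr in range(-2, 3)
                    for dc in range(-2, 3)} & cells
            cells -= near
            frontier.extend(near)
        parts.append(part)
    return parts

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
block = {(10, 10), (10, 11), (11, 10), (11, 11)}
assert len(modular_parts(glider)) == 1          # one tightly coupled part
assert len(modular_parts(glider | block)) == 2  # the distant block is separate
```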


But what about the whole evolution? We can draw what amounts to a causal graph that shows the causal connections between the “independent modular parts” that exist at each step:


And given this, we can summarize the “modular structure” of this particular oscillator by the causal graph:


Ultimately all that matters in the “overall operation” of the oscillator is the partial ordering defined by this graph. Parts that appear “horizontally separated” (or, more precisely, in antichains, or in physics terminology, spacelike separated) can be generated independently and in parallel. But parts that follow each other in the partial order need to be generated in that order (i.e. in physics terms, they are timelike separated).
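The parallel/sequential distinction can be illustrated with Python’s standard graphlib on a toy dependency graph (the part names here are purely illustrative, not an actual Life causal graph): each batch of “ready” nodes is an antichain whose members could be generated independently, while successive batches must follow in order:

```python
from graphlib import TopologicalSorter

# Toy causal graph: each part maps to the parts it causally depends on
deps = {
    "gun A": [],
    "gun B": [],
    "stream A": ["gun A"],
    "stream B": ["gun B"],
    "collision": ["stream A", "stream B"],
}

ts = TopologicalSorter(deps)
ts.prepare()
layers = []
while ts.is_active():
    ready = sorted(ts.get_ready())  # mutually "spacelike": no order among these
    layers.append(ready)
    ts.done(*ready)

# Parts in the same layer could be generated in parallel;
# layers themselves are "timelike" ordered
assert layers == [["gun A", "gun B"],
                  ["stream A", "stream B"],
                  ["collision"]]
```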


As another example, let’s look at graphs for the various oscillators of period 16 that we showed above:


What we see is that the early period-16 oscillators were quite modular, and had many parts that in effect operated independently. But the later, smaller ones were not so modular. And indeed the last one shown here had no parts that could operate independently; the whole pattern had to be taken together at each step.


And indeed, what we’ll often see is that the more optimized a structure is, the less modular it tends to be. If we’re going to construct something “by hand” we usually need to assemble it in parts, because that’s what allows us to “understand what we’re doing”. But if, for example, we just find a structure in a search, there’s no reason for it to be “understandable”, and there’s no reason for it to be particularly modular.


Different steps in a given oscillator can involve different numbers of modular parts. But as a simple way to assess the “modularity” of an oscillator, we can just ask for the average number of parts over the course of one period. So as an example, here are the results for period-30 oscillators:
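Here’s a sketch of that average for a toy case (helper names are mine), using the distance-2 connectivity criterion described above to count parts at each step of one period:

```python
from collections import Counter

def step(cells):
    """One Game-of-Life step on a set of live (row, col) cells."""
    counts = Counter((r + dr, c + dc) for (r, c) in cells
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    return {p for p, n in counts.items() if n == 3 or (n == 2 and p in cells)}

def modular_parts(cells):
    """Connected components of cells within Chebyshev distance 2."""
    cells, parts = set(cells), []
    while cells:
        frontier, part = [cells.pop()], set()
        while frontier:
            r, c = frontier.pop()
            part.add((r, c))
            near = {(r + dr, c + dc) for dr in range(-2, 3)
                    for dc in range(-2, 3)} & cells
            cells -= near
            frontier.extend(near)
        parts.append(part)
    return parts

def modularity_index(cells, period):
    """Average number of independent modular parts over one period."""
    state, total = set(cells), 0
    for _ in range(period):
        total += len(modular_parts(state))
        state = step(state)
    return total / period

blinker = {(0, 0), (0, 1), (0, 2)}
far_blinker = {(10, 10), (10, 11), (10, 12)}
assert modularity_index(blinker, 2) == 1.0                 # one part throughout
assert modularity_index(blinker | far_blinker, 2) == 2.0   # two independent parts
```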


Later, we’ll discuss how we can use the level of modularity to assess whether a pattern is likely to have been found by a search or by construction. But for now, this shows how the modularity index has varied over the years for the best known progressively smaller oscillators of a given period—with the main conclusion being that as the oscillators get optimized for size, so also their modularity index tends to decrease:


Gliders & Spaceships


Oscillators are structures that cycle but do not move. “Gliders” and, more generally, “spaceships” are structures that move every time they cycle. When the Game of Life was first introduced, four examples of these (all of period 4) were found almost immediately (the last one being the result of trying to extend the one before it):


Within a couple of years, experimentation had revealed two variants, with periods 12 and 20 respectively, involving additional structures:


But after that, for nearly two decades, no more spaceships were found. In 1989, however, a systematic method for searching was invented, and in the years since, a steady stream of new spaceships has been found. A variety of different periods have been seen


as well as a variety of speeds (and three different angles):


The forms of these spaceships are quite diverse:


Some are “tightly integrated”, while some have many “modular pieces”, as revealed by their causal graphs:


Period-96 spaceships provide an interesting example of the “arc of progress” in the Game of Life. Back in 1971, a systematic enumeration of small polyominoes was done, looking for one that could “reproduce itself”. While no polyomino on its own seemed to do this, a case was found where part of the pattern produced after 48 steps seemed to reappear repeatedly every 48 steps thereafter:


One might expect this repeated behavior to continue forever. But in a typical manifestation of computational irreducibility, it doesn’t, instead stopping its “regeneration” after 24 cycles, and then reaching a steady state (apart from “radiated” gliders) after 3911 steps:


But from an engineering point of view this kind of complexity was just viewed as a nuisance, and efforts were made to “tame” and avoid it.


Adding just one still-life block to the so-called “switch engine”


produces a structure that keeps generating a “periodic wake” forever:


But can this somehow be “refactored” as a “pure spaceship” that doesn’t “leave anything behind”? In 1991 it was discovered that, yes, there was an arrangement of 13 switch engines that could successfully “clean up behind themselves”, to produce a structure that would act as a spaceship with period 96:


But could this be made simpler? It took many years—and tests of many different configurations—but in the end it was found that just 2 switch engines were sufficient:


Looking at the final pattern in spacetime gives a definite impression of “narrowly contained complexity”:


What about the causal graphs? Basically these just decrease in “width” (i.e. number of independent modular parts) as the number of engines decreases:


Like many other things in Game-of-Life engineering, both search and construction have been used to find spaceships. As an extreme example of construction let’s talk about the case of spaceships with speed 31/240. In 2013, an analog of the switch engine above was found—which “eats” blocks 31 cells apart every 240 steps:


But could this be turned into a “self-sufficient” spaceship? A year later an almost absurdly large (934852×290482) pattern was constructed that did this—by using streams of gliders and spaceships (together with dynamically assembled glider guns) to create appropriate blocks in front, and remove them behind (along with all the “construction equipment” that was used):


By 2016, a pattern with about 700× less area had been constructed. And now, just a few weeks ago, a pattern with 1300× less area (11974×45755) was constructed:


And while this is still huge, it’s still made of modular pieces that operate in an “understandable” way. No doubt there’s a much smaller pattern that operates as a spaceship of the same speed, but—computational irreducibility being what it is—we have no idea how large the pattern might be, or how we might efficiently search for it.


Glider Guns


What can one engineer in the Game of Life? A crucial moment in the development of Game-of-Life engineering was the discovery of the original glider gun in 1970. And what was particularly important about the glider gun is that it was a first example of something that could be thought of as a “signal generator”—that one could imagine would allow one to implement electrical-engineering-style “devices” in the Game of Life.


The original glider gun produces gliders every 30 steps, in a sense defining a “clock speed” of 1/30 for any “circuit” driven by it. Within a year after the original glider gun, two other “slower” glider guns had also been discovered


both working on similar principles, as suggested by their causal graphs:


It wasn’t until 1990 that any additional “guns” were found. And in the years since, a sequence of guns has been found, with a rather wide range of distinct periods:


Some of the guns found have very long periods:


But as part of the effort to do constructions in the 1990s a gun was constructed that had overall period 210, but which interwove multiple glider streams to ultimately produce gliders every 14 steps (which is the maximum rate possible, while avoiding interference of successive gliders):


Over the years, a whole variety of different glider guns have been found. Some are in effect “thoroughly controlled” constructions. Others are more based on some complex process that is reined in to the point where it just produces a stream of gliders and nothing more:


An example of a somewhat surprising glider gun—with the shortest “true period” known—was found in 2024:


The causal graph for this glider gun shows a mixture of irreducible “search-found” parts, together with a collection of “well-known” small modular parts:


By the way, in 2013 it was actually found possible to extend the construction for oscillators of any period to a construction for guns of any period (or at least any period above 78):


In addition to having streams of gliders, it’s also sometimes been found useful to have streams of other “spaceships”. Very early on, it was already known that one could create small spaceships by colliding gliders:


But by the mid-1990s it had been found that direct “spaceship guns” could also be made—and over the years smaller and smaller “optimized” versions have been found:


The last of these—from just last month—has a surprisingly simple structure, being built from components that were already known 30 years ago, and having a causal graph that shows very modular construction:


Building from History


We’ve talked about some of the history of how specific patterns in the Game of Life were found. But what about the overall “flow of engineering progress”? And, in particular, when something new is found, how much does it build on what has been found before? In real-world engineering, things like patent citations potentially give one an indication of this. But in the Game of Life one can approach the question much more systematically and directly, just asking what configurations of bits from older patterns are used in newer ones.


As we discussed above, given a pattern such as


we can pick out its “modular parts”, here rotated to canonical orientations:


Then we can see if these parts correspond to (any phase of) previously known patterns, which in this case they all do:


So now for all structures in the database we can ask what parts they involve. Here’s a plot of the overall frequencies of these parts:


It’s notable that the highest-ranked part is a so-called “eater” that’s often used in constructions, but occurs only quite infrequently in evolution from random initial conditions. It’s also notable that (for no particularly obvious reason) the frequency of the nth most common structure is roughly 1/n.


So when were the various structures that appear here first found? As this picture shows, most—but not all—were found very early in the history of the Game of Life:


In other words, most of the parts used in structures from any time in the history of the Game of Life come from very early in its history. Or, in effect, structures typically go “back to basics” in the parts they use.


Here’s a more detailed picture, showing the relative amount of use of each part in structures from each year:


There are definite “fashions” to be seen here, with some structures “coming into fashion” for a while (sometimes, but not always, right after they were first found), and then dropping out.


One might perhaps imagine that smaller parts (i.e. ones with smaller areas) would be more popular than larger ones. But plotting areas of parts against their rank, we see that there are some large parts that are quite common, and some small ones that are rare:


We’ve seen that many of the most popular parts overall are ones that were found early in the history of the Game of Life. But plenty of distinct modular parts were also found much later. This shows the number of distinct new modular parts found across all patterns in successive years:


Normalizing by the number of new patterns found each year, we see a general gradual increase in the relative number of new modular parts, presumably reflecting the greater use of search in finding patterns, or components of patterns:


But how important have these later-found modular parts been? This shows the total rate at which modular parts found in a given year were subsequently used—and what we see, once again, is that parts found early are overwhelmingly the ones that are subsequently used:


A somewhat complementary way to look at this is to ask of all patterns found in a given year, how many are “purely de novo”, in the sense that they use no previously found modular parts (as indicated in red), and how many use previously found parts:


A cumulative version of this makes it clear that in early years most patterns are purely de novo, but later on, there’s an increasing amount of “reuse” of previously found parts—or, in other words, in later years the “engineering history” is increasingly important:


It should be said, however, that if one wants the full story of “what’s being used” it’s a bit more nuanced. Because here we’re always treating each modular part of each pattern as a separate entity, so that we consider any given pattern to “depend” only on base modular parts. But “really” it could depend on another whole structure, itself built of many modular parts. And in what we’re doing here, we’re not tracking that hierarchy of dependencies. Were we to do so, we would likely be able to see more complex “technology stacks” in the Game of Life. But instead we’re always “going down to the primitives”. (If we were dealing with electronics it’d be like asking “What are the transistors and capacitors that are being used?”, rather than “What is the caching architecture, or how is the floating point unit set up?”)


OK, but in terms of “base modular parts” a simple question to ask is how many get used in each pattern. This shows the number of (base) modular parts in patterns found in each year:


There are always a certain number of patterns that just consist of a single modular part—and, as we saw above, that was more common earlier in the history of the Game of Life. But now we also see that there have been an increasing number of patterns that use many modular parts—typically reflecting a higher degree of “construction” (rather than search) going on.


By the way, for comparison, these plots show the total areas and the numbers of (black) cells in patterns found in each year; both show increases early on, but more or less level off by the 1990s:


But, OK, if we look across all patterns in the database, how many parts do they end up using? Here’s the overall distribution:


At least for a certain range of numbers of parts, this falls roughly exponentially, reflecting the idea that it’s been exponentially less likely for people to come up with (or find) patterns that have progressively larger numbers of distinct modular parts.


How has this changed over time? This shows a cumulative plot of the relative frequencies with which different numbers of modular parts appear in patterns up to a given year


indicating that over time the distribution of the number of modular parts has gotten progressively broader—or, in other words, as we’ve seen in other ways above, more patterns make use of larger numbers of modular parts.


We’ve been looking at all the patterns that have been found. But we can also ask, say, just about oscillators. And then we can ask, for example, which oscillators (with which periods) contain which others, as in:


And looking at all known oscillators we can see how common different “oscillator primitives” are in building up other oscillators:


We can also ask in which year “oscillator primitives” at different ranks were found. Unlike in the case of all structures above, we now see that some oscillator primitives that were found only quite recently appear at fairly high ranks—reflecting the fact that in this case, once a primitive has been found, it’s often immediately useful in making oscillators that have multiples of its period:

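The “multiples of its period” point is just least-common-multiple arithmetic: if an oscillator is assembled from non-interacting parts with periods p1, …, pk, the whole configuration repeats with period lcm(p1, …, pk), so a newly found period-p primitive immediately yields oscillators at multiples of p. A trivial worked sketch (mine, not from the article):

```python
from math import lcm

# Non-interacting parts of periods p1..pk repeat together with
# period lcm(p1, ..., pk) -- so a newly found period-p primitive
# immediately anchors oscillators at multiples of p.
assert lcm(2, 3) == 6      # a p2 part plus a p3 part gives p6
assert lcm(4, 6) == 12
assert lcm(129, 3) == 129  # 129 = 3 * 43, already a multiple of 3
```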

Lifetime Hacking and Die Hards


We can think of almost everything we’ve talked about so far as being aimed at creating structures (like “clocks” and “wires”) that are recognizably useful for building traditional “machine-like” engineering systems. But a different possible objective is to find patterns that have some feature we can recognize, whether with obvious immediate “utility” or not. And as one example of this we can think about finding so-called “die hard” patterns that live as long as possible before dying out.


The phenomenon of computational irreducibility tells us that even given a particular pattern we can’t in general “know in advance” how long it’s going to take to die out (or if it ultimately dies out at all). So it’s inevitable that the problem of finding ultimate die-hard patterns can be unboundedly difficult, just like analogous problems for other computational systems (such as finding so-called “busy beavers” in Turing machines).


But in practice one can use both search and construction techniques to find patterns that at least live a long time (even if not the very longest possible time). And as an example, here’s a very simple pattern (found by search) that lives for 132 steps before dying out (the “puff” at the end on the left is a reflection of how we’re showing “trails”; all the actual cells are zero at that point):


Searching nearly 10^16 randomly chosen 16×16 patterns (out of a total of ≈ 10^77 possible such patterns), the longest lifetime found is 1413 steps—achieved with a rather random-looking initial pattern:


But is this the best one can do? Well, no. Just consider a block and a spaceship n cells apart. It’ll take 2n steps for them to collide, and if the phases are right, annihilate each other:

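To make the speed arithmetic concrete, here is a minimal set-based Life evolver in Python (a sketch of my own, not code from the article). It verifies the well-known fact that a glider advances one cell diagonally every 4 steps, i.e. it travels at c/4; a spaceship moving at c/2 correspondingly needs 2n steps to cross n cells, as in the collision described above.

```python
from collections import Counter

def life_step(cells):
    """One generation of Conway's Game of Life on an unbounded grid.
    `cells` is a set of (row, col) pairs for the live cells."""
    neighbor_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in cells
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0))
    # A cell is alive next step if it has 3 live neighbors,
    # or 2 live neighbors and is currently alive.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in cells)}

# A glider; after 4 steps it reappears shifted by (1, 1)
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
g = glider
for _ in range(4):
    g = life_step(g)
assert g == {(r + 1, c + 1) for (r, c) in glider}  # speed c/4
```

The set-of-coordinates representation makes the unbounded grid trivial: only live cells and their neighbors are ever touched.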

So by picking the separation n to be large enough, we can make this configuration “live as long as we want”. But what if we limit the size of the initial pattern, say to 32×32? In 2022 the following pattern was constructed:


And this pattern is carefully set up so that after 30,274 steps, everything lines up and it dies out, as we can see in the (vertically foreshortened) spacetime diagram on the left:


And, yes, the construction here clearly goes much further than search was able to reach. But can we go yet further? In 2023 a 116×86 pattern was constructed


that, it was proved, eventually dies out, but only after the absurdly large number of 17↑↑↑3 steps (probably even much larger than the number of emes in the ruliad), as given by:


or


The Comparison with Adaptive Evolution


There are some definite rough ways in which technology development parallels biological evolution. Both involve the concept of trying out possibilities and building on ones that work. But technology development has always ultimately been driven by human effort, whereas biological evolution is, in effect, a “blind” process, based on the natural selection of random mutations. So what happens if we try to apply something like biological evolution to the Game of Life? As an example, let’s look at adaptive evolution that’s trying to maximize finite lifetime based on making a sequence of random point mutations within an initially random 16×16 pattern. Most of those mutations don’t give patterns with larger (finite) lifetimes, but occasionally there’s a “breakthrough” and the lifetime achieved so far jumps up:

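In outline, the adaptive evolution procedure just described can be sketched as follows. This is my own minimal Python reconstruction (with a small grid, a step cap, and cycle detection as stand-ins for whatever cutoffs the actual experiments used, which are assumptions here):

```python
import random
from collections import Counter

def life_step(cells):
    # One Life generation; cells is a set of (row, col) pairs
    counts = Counter((r + dr, c + dc) for (r, c) in cells
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in cells)}

def finite_lifetime(cells, max_steps=150):
    """Steps until the pattern dies out; None if it enters a cycle
    or is still alive at the cutoff (treated as 'not finite')."""
    seen = set()
    for t in range(max_steps):
        if not cells:
            return t
        frozen = frozenset(cells)
        if frozen in seen:
            return None
        seen.add(frozen)
        cells = life_step(cells)
    return None

def adaptive_evolution(size=6, mutations=80, seed=0):
    # Start from a random pattern; keep point mutations that raise
    # the (finite) lifetime, discard the rest.
    rng = random.Random(seed)
    pattern = {(i, j) for i in range(size) for j in range(size)
               if rng.random() < 0.5}
    best = finite_lifetime(pattern) or 0
    history = [best]
    for _ in range(mutations):
        candidate = set(pattern)
        candidate ^= {(rng.randrange(size), rng.randrange(size))}  # flip one cell
        life = finite_lifetime(candidate)
        if life is not None and life > best:   # a "breakthrough"
            pattern, best = candidate, life
            history.append(best)
    return history
```

By construction the achieved lifetime is non-decreasing: most mutations are rejected, and the occasional accepted one produces exactly the kind of upward “jump” seen in the plots.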

The actual behaviors corresponding to the breakthroughs in this case are:


And here are some other outcomes from adaptive evolution:


In almost all cases, a limited number of steps of adaptive evolution do succeed in generating patterns with fairly long finite lifetimes. But the behavior we see typically shows no “readily understandable mechanisms”—and no obviously separable modular parts. And instead—just like in my recent studies of both biological evolution and machine learning—what we get are basically “lumps of irreducible computation” that “just happen” to show what we’re looking for (here, long lifetime).


Invented or Discovered? Made for a Purpose at All?


Let’s say we’re presented with an array of cells that’s an initial condition for the Game of Life. Can we tell “where it came from”? Is it “just arbitrary” (or “random”)? Or was it “set up for a purpose”? And if it was “set up for a purpose”, was it “invented” (and “constructed”) for that purpose, or was it just “discovered” (say by a search) to fulfill that purpose?


Whether one’s dealing with archaeology, evolutionary biology, forensic science, the identification of alien intelligence or, for that matter, theology, the question of whether something “was set up for a purpose” is a philosophically fraught one. Any behavior one sees one can potentially explain either in terms of the mechanism that produces it, or in terms of what it “achieves”. Things get a little clearer if we have a particular language for describing both mechanisms and purposes. Then we can ask questions like: “Is the behavior we care about more succinctly described in terms of its mechanism or its purpose?” So, for example, “It behaves as a period-15 glider gun” might be an adequate purpose-oriented description, that’s much shorter than a mechanism-oriented description in terms of arrangements of cells.


But what is the appropriate “lexicon of purposes” for the Game of Life? In effect, that’s a core question for Game-of-Life engineering. Because what engineering—and technology in general—is ultimately about is taking whatever raw material is available (whether from the physical world, or from the Game of Life) and somehow fashioning it into something that aligns with human purposes. But then we’re back to what counts as a valid human purpose. How deeply does the purpose have to connect in to everything we do? Is it, for example, enough for something to “look nice”, or is that not “utilitarian enough”? There aren’t absolute answers to these questions. And indeed the answers can change over time, as new uses for things are discovered (or invented).


But for the Game of Life we can start with some of the “purposes” we’ve discussed here—like “be an oscillator of a certain period”, “reflect gliders”, “generate the primes” or even just “die after as long as possible”. Let’s say we just start enumerating possible initial patterns, either randomly, or exhaustively. How often will we come across patterns that “achieve one of these purposes”? And will it “only achieve that purpose” or will it also “do extra stuff” that “seems irrelevant”?


As an example, consider enumerating all possible 3×3 patterns of cells. There are altogether 2^9 = 512 such patterns, or just 102 after reducing by symmetries. Of these, 79 evolve to purely static patterns, while 17 evolve to patterns of period 2. But only one (in two phases) is “immediately of period 2”—and so might reasonably be immediately interpreted as having been “created for the purpose of being of period 2”:

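The symmetry count can be checked directly. Here is a Python sketch of mine (“reducing by symmetries” is taken to mean the 8 rotations and reflections of the 3×3 window); it also confirms that the middle-row blinker is immediately of period 2:

```python
from collections import Counter

def life_step(cells):
    # One Life generation; cells is a set of (row, col) pairs
    counts = Counter((r + dr, c + dc) for (r, c) in cells
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in cells)}

def symmetry_classes_3x3():
    """Count 3x3 bit patterns up to the 8 rotations/reflections."""
    def rot(g):   # rotate 90 degrees: new[r][c] = old[2-c][r]
        return tuple(g[3 * (2 - c) + r] for r in range(3) for c in range(3))
    def flip(g):  # mirror left-right: new[r][c] = old[r][2-c]
        return tuple(g[3 * r + (2 - c)] for r in range(3) for c in range(3))
    reps = set()
    for n in range(512):
        g = tuple((n >> i) & 1 for i in range(9))
        orbit = []
        for _ in range(4):
            orbit += [g, flip(g)]
            g = rot(g)
        reps.add(min(orbit))  # canonical representative of the orbit
    return len(reps)

assert symmetry_classes_3x3() == 102

# The blinker is immediately of period 2
blinker = {(1, 0), (1, 1), (1, 2)}
once = life_step(blinker)
assert once != blinker and life_step(once) == blinker
```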

Other patterns can take a while to “become period 2”, but then at least give “pure period-2 objects”. And for example this one can be interpreted as being the smallest precursor, and taking the least time, to reach the period-2 object it produces:


There are other cases that “get to the same place” but seem to “wander around” doing so, and therefore don’t seem as convincing as having been “created for the purpose of making a period-2 oscillator”:


Then there are much more egregious cases. Like


which after 173 steps gives


but only after going through all sorts of complicated intermediate behavior


that definitely doesn’t make it look like it’s going “straight to its purpose” (unless perhaps its purpose is to produce that final pattern from the smallest initial precursor, etc.).


But, OK. Let’s imagine we have a pattern that “goes straight to” some “recognizable purpose” (like being an oscillator of a certain period). The next question is: was that pattern explicitly constructed with an understanding of how it would achieve its purpose, or was it instead “blindly found” by some kind of search?


As an example, let’s look at some period-9 oscillators:


One like


seems like it must have been constructed out of “existing parts”, while one like


seems like it could only plausibly have been found by a search.


Spacetime views don’t tell us much in these particular cases:


But causal graphs are much more revealing:


They show that in the first case there are lots of “factored modular parts”, while in the second case there’s basically just one “irreducible blob” with no obvious separable parts. And we can view this as an immediate signal for “how human” each pattern is. In a sense it’s a reflection of the computational boundedness of our minds. When there are factored modular parts that interact fairly rarely and each behave in a fairly simple way, it’s realistic for us to “get our minds around” what’s going on. But when there’s just an “irreducible blob of activity” we’d have to compute too much and keep too much in mind at once for us to be able to really “understand what’s going on” and for example produce a human-level narrative explanation of it.


If we find a pattern by search, however, we don’t really have to “understand it”; it’s just something we computationally “discover out there in the computational universe” that “happens” to do what we want. And, indeed, as in the example here, it often does what it does in a quite minimal (if incomprehensible) way. Something that’s found by human effort is much less likely to be minimal; in effect it’s at least somewhat “optimized for comprehensibility” rather than for minimality or ease of being found by search. And indeed it will often be far too big (e.g. in terms of number of cells) for any pure exhaustive or random search to plausibly find it—even though the “human-level narrative” for it might be quite short.


Here are the causal graphs for all the period-9 oscillators from above:


Some, we can see, can readily be broken down into multiple rarely interacting distinct components; others can’t be decomposed in this kind of way. And in a first approximation, the “decomposable” ones seem to be precisely those that were somehow “constructed by human effort”, while the non-decomposable ones seem to be those that were “discovered by searches”.


Typically, the way the “constructions” are done is to start with some collection of known parts, then, by trial and error (sometimes computer assisted) see how these can be fit together to get something that does what one wants. Searches, on the other hand, typically operate on “raw” configurations of cells, blindly going through a large number of possible configurations, at every stage automatically testing whether one’s got something that does what one wants.


And in the end these different strategies reveal themselves in the character of the final patterns they produce, and in the causal graphs that represent these patterns and their behavior.


Principles of Engineering Strategy from the Game of Life


In engineering as it’s traditionally been practiced, the main emphasis tends to be on figuring out plans, and then constructing things based on those plans. Typically one starts from components one has, then tries to figure out how to combine them to incrementally build up what one wants.


And, as we’ve discussed, this is also a way of developing technology in the Game of Life. But it’s not the only way. Another way is just to search for whole pieces of technology one wants.


Traditional intuition might make one assume this would be hopeless. But the repeated lesson of my discoveries about simple programs—as well as what’s been done with the Game of Life—is that actually it’s often not hopeless at all, and instead it’s very powerful.


Yes, what you get is not likely to be readily “understandable”. But it is likely to be minimal and potentially quite optimal for whatever it is that it does. I’ve often talked of this approach as “mining from the computational universe”. And over the course of many years I’ve had success with it in all sorts of disparate areas. And now, here, we’ve seen in the Game of Life a particularly clean example where search is used alongside construction in developing technology.


It’s a feature of things produced by construction that they are “born understandable”. In effect, they are computationally reducible enough that we can “fit them in our finite minds” and “understand them”. But things found by search don’t have this feature. And most of the time the behavior they’ll show will be full of computational irreducibility.


In both biological evolution and machine learning my recent investigations suggest that most of what we’re seeing are “lumps of irreducible computation” found at random that just “happen to achieve the necessary objectives”. This hasn’t been something familiar in traditional engineering, but it’s something tremendously powerful. And from the examples we’ve seen here in the Game of Life it’s clear that it can often achieve things that seem completely inaccessible by traditional methods based on explicit construction.


At first we might assume that irreducible computation is too unruly and unpredictable to be useful in achieving “understandable objectives”. But if we find just the right piece of irreducible computation then it’ll achieve the objective we want, often in a very minimal way. And the point is that the computational universe is in a sense big enough that we’ll usually be able to find that “right piece of irreducible computation”.


One thing we see in Game-of-Life engineering is something that’s in a sense a compromise between irreducible computation and predictable construction. The basic idea is to take something that’s computationally irreducible, and to “put it in a cage” that constrains it to do what one wants. The computational irreducibility is in a sense the “spark” in the system; the cage provides the control we need to harness that spark in a way that meets our objectives.


Let’s look at some examples. As our “spark” we’ll use the R pentomino that we discussed at the very beginning. On its own, this generates all sorts of complex behavior—that for the most part doesn’t align with typical objectives we might define (though as a “side show” it does happen to generate gliders). But the idea is to put constraints on the R pentomino to make it “useful”.

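The R pentomino’s long-lived “spark” is easy to confirm with a simple set-based evolver (again a sketch of my own; the well-documented fact is that an isolated R pentomino only settles down after 1103 generations):

```python
from collections import Counter

def life_step(cells):
    # One Life generation; cells is a set of (row, col) pairs
    counts = Counter((r + dr, c + dc) for (r, c) in cells
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in cells)}

# .XX
# XX.
# .X.
r_pentomino = {(0, 1), (0, 2), (1, 0), (1, 1), (2, 1)}

g = r_pentomino
for _ in range(500):
    g = life_step(g)
# Well before generation 1103 the pattern is still nonempty
# and still changing from step to step
assert g and life_step(g) != g
```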

Here’s a case where we’ve tried to “build a road” for the R pentomino to go down:


And looking at this every 18 steps we see that, at least for a while, the R pentomino has indeed moved down the road. But it’s also generated something of an “explosion”, and eventually this explosion catches up, and the R pentomino is destroyed.


So can we maintain enough control to let the R pentomino survive? The answer is yes. And here, for example, is a period-12 oscillator, “powered” by an R pentomino at its center:


Without the R pentomino, the structure we’ve set up cycles with period 6:


And when we insert the R pentomino this structure “keeps it under control”—so that the only effect it ultimately has is to double the period, to 12.


Here’s a more dramatic example. Start with a static configuration of four so-called “eaters”:


Now insert two R pentominoes. They’ll start doing their thing, generating what seems like quite random behavior. But the “cage” defined by the “eaters” limits what can happen, and in the end what emerges is an oscillator—that has period 129:


What else can one “make R pentominoes do”? Well, with appropriate harnesses, they can for example be used to “power” oscillators with many different periods:


“Be an oscillator of a certain period” is in a sense a simple objective. But what about more complex objectives? Of course, any pattern of cells in the Game of Life will do something. But the question is whether that something aligns with technological objectives we have.


Generically, things in the Game of Life will behave in computationally irreducible ways. And it’s this very fact that gives such richness to what can be done with the Game of Life. But can the computational irreducibility be controlled—and harnessed for technological purposes? In a sense that is the core challenge of engineering in both the Game of Life, and in the real world. (It’s also rather directly the challenge we face in making use of the computational power of AI, but still adequately aligning it with human objectives.)


As we look at the arc of technological development in the Game of Life we see over the course of half a century all sorts of different advances being made. But will there be an end to this? Will we eventually run out of inventions and discoveries? The underlying presence of computational irreducibility makes it clear that we will not. The only thing that might end is the set of objectives we’re trying to meet. We now know how to make oscillators of any period. And unless we insist on for example finding the smallest oscillator of a given period, we can consider the problem of finding oscillators solved, with nothing more to discover.


In the real world nature and the evolution of the universe inevitably confront us with new issues, which lead to new objectives. In the Game of Life—as in any other abstract area, like mathematics—the issue of defining new objectives is up to us. Computational irreducibility leads to infinite diversity and richness of what’s possible. The issue for us is to figure out what direction we want to go. And the story of engineering and technology in the Game of Life gives us, in effect, a simple model for the issues we confront in other areas of technology, like AI.


Some Personal Backstory


I’m not sure if I made the right decision back in 1981. I had come up with a very simple class of systems and was doing computer experiments on them, and was starting to get some interesting results. And when I mentioned what I was doing to a group of (then young) computer scientists, they said “Oh, those things you’re studying are called cellular automata”. Well, actually, the cellular automata they were talking about were 2D systems while mine were 1D. And though that might seem like a technical difference, it has a big effect on one’s impression of what’s going on—because in 1D one can readily see “spacetime histories” that give an immediate sense of the “whole behavior of the system”, while in 2D one basically can’t.


I wondered what to call my models. I toyed with the term “polymones”—as a modernized nod to Leibniz’s monads. But in the end I decided that I should stick with a simpler connection to history, and just call my models, like their 2D analogs, “cellular automata”. In many ways I’m happy with that decision. Though one of its downsides has been a certain amount of conceptual confusion—more than anything centered around the Game of Life.


People often know that the Game of Life is an example of a cellular automaton. And they also know that within the Game of Life lots of structures (like gliders and glider guns) can be set up to do particular things. Meanwhile, they hear about my discoveries about the generation of complexity in cellular automata (like rule 30). And somehow they conflate these things—leading to all too many books etc. that show pictures of simple gliders in the Game of Life and say “Look at all this complexity!”


At some level it’s a confusion between science and engineering. My efforts around cellular automata have centered on empirical science questions like “What does this cellular automaton do if you run it?” But—as I’ve discussed at length above—most of what’s been done with the Game of Life has centered instead on questions of engineering, like “What recognizable (or useful) structures can you build in the system?” It’s a different objective, with different results. And, in particular, by asking to “engineer understandable technology” one’s specifically eschewing the phenomenon of computational irreducibility—and the whole story of the emergence of complexity that’s been so central to my own scientific work on cellular automata and so much else.


Many times over the years, people would show me things they’d been able to build in the Game of Life—and I really wouldn’t know what to make of them. Yes, they seemed like impressive hacks. But what was the big picture? Was this just fun, or was there some broader intellectual point? Well, finally, not long ago I realized: this is not a story of science, it’s a story about the arc of engineering, or what one can call “metaengineering”.


And back in 2018, in connection with the upcoming 50th anniversary of the Game of Life, I decided to see what I could figure out about this. But I wasn’t satisfied with how far I got, and other priorities intervened. So—beyond one small comment that ended up in a 2020 New York Times article—I didn’t write anything about what I’d done. And the project languished. Until now, when somehow my long-time interest in “alien engineering”, combined with my recent results about biological evolution, coalesced into a feeling that it was time to finally figure out what we could learn from all that effort that’s been put into the Game of Life.


In a sense this brings closure to a very long-running story for me. The first time I heard about the Game of Life was in 1973. I was an early teenager then, and I’d just gotten access to a computer. By today’s standards the computer (an Elliott 903C) was a primitive one: the size of a desk, programmed with paper tape, with only 24 kilobytes of memory. I was interested in using it for things like writing a simulator for the physics of idealized gas molecules. But other kids who had access to the computer were instead more interested (much as many kids might be today) in writing games. Someone wrote a “Hunt the Wumpus” game. And someone else wrote a program for the “Game of Life”. The configurations of cells at each generation were printed out on a teleprinter. And for some reason people were particularly taken with the “Cheshire cat” configuration, in which all that was left at the end (as in Alice in Wonderland) was a “smile”. At the time, I absolutely didn’t see the point of any of this. I was interested in science, not games, and the Game of Life pretty much lost me at “Game”.


For a number of years I didn’t have any further contact with the Game of Life. But then I met Bill Gosper, who I later learned had in 1970 discovered the glider gun in the Game of Life. I met Gosper first “online” (yes, even in 1978 that was a thing, at least if you used the MIT-MC computer through the ARPANET)—then in person in 1979. And in 1980 I visited him at Xerox PARC, where he described himself as part of the “entertainment division” and gave me strange math formulas printed on a not-yet-out-of-the-lab color laser printer


Bill Gosper math formulas


and also showed me a bitmapped display (complete with GUI) with lots of pixels dancing around that he enthusiastically explained were showing the Game of Life. Knowing what I know now, I would have been excited by what I saw. But at the time, it didn’t really register.


Still, in 1981, having started my big investigation of 1D cellular automata, and having made the connection to the 2D case of the Game of Life, I started wondering whether there was something “scientifically useful” that I could glean from all the effort I knew (particularly from Gosper) had been put into Life. It didn’t help that almost none of the output of that effort had been published. And in those days before the web, personal contact was pretty much the only way to get unpublished material. One of my larger “finds” was from a friend of mine from Oxford who passed on “lab notebook pages” he’d got from someone who was enumerating outcomes from different Game-of-Life initial configurations:


Game-of-Life initial configurations


And from material like this, as well as my own simulations, I came up with some tentative “scientific conclusions”, which I summarized in 1982 in a paragraph in my first big paper about cellular automata:


Stephen Wolfram cellular automata paper


But then, at the beginning of 1983, as part of my continuing effort to do science on cellular automata, I made a discovery. Among all cellular automata there seemed to be four basic classes of behavior, with class 4 being characterized by the presence of localized structures, sometimes just periodic, and sometimes moving:


Class 4 cellular automata behavior


I immediately recognized the analogy to the Game of Life, and to oscillators and gliders there. And indeed this analogy was part of what “tipped me off” to thinking about the ubiquitous computational capabilities of cellular automata, and to the phenomenon of computational irreducibility.


Meanwhile, in March 1983, I co-organized what was effectively the first-ever conference on cellular automata (held at Los Alamos)—and one of the people I invited was Gosper. He announced his Hashlife algorithm (which was crucial to future Life research) there, and came bearing gifts: printouts for me of Life, that I annotated, and still have in my archives:


Annotated printouts


I asked Gosper to do some “more scientific” experiments for me—for example starting from a region of randomness, then seeing what happened:


More Game-of-Life experiments


But Gosper really wasn’t interested in what I saw as being science; he wanted to do engineering, and make constructions—like this one he gave me, showing two glider guns exchanging streams of gliders (why would one care, I wondered):


Two glider guns


I’d mostly studied 1D cellular automata—where I’d discovered a lot by systematically looking at their behavior “laid out in spacetime”. But in early 1984 I resolved to also systematically check out 2D cellular automata. And mostly the resounding conclusion was that their basic behavior was very similar to 1D. Out of all the rules we studied, the Game of Life didn’t particularly stand out. But—mostly to provide a familiar comparison point—I included pictures of it in the paper we wrote:


2D cellular automata


And we also went to the trouble of making a 3D “spacetime” picture of the Game of Life on a Cray supercomputer—though it was too small to show anything terribly interesting:


3D Game of Life


It had been a column in Scientific American in 1970 that had first propelled the Game of Life to public prominence—and that had also launched the first great Life engineering challenge of finding a glider gun. And in both 1984 and 1985 a successor to that very same column ran stories about my 1D cellular automata. And in 1985, in collaboration with Scientific American, I thought it would be fun and interesting to reprise the 1970 glider gun challenge, but now for 1D class 4 cellular automata:


Glider gun guidelines paper


Many people participated. And my main conclusion was: yes, it seemed like one could do the same kinds of engineering in typical 1D class 4 cellular automata as one could in the Game of Life. But this was all several years before the web, and the kind of online community that has driven so much Game of Life engineering in modern times wasn’t yet able to form.


Meanwhile, by the next year, I was starting the development of Mathematica and what’s now the Wolfram Language, and for a few years didn’t have much time to think about cellular automata. But in 1987 when Gosper got involved in making pre-release demos of Mathematica he once again excitedly told me about his discoveries in the Game of Life, and gave me pictures like:


Gliders guns using early Mathematica


It was in 1992 that the Game of Life once again appeared in my life. I had recently embarked on what would become the 10-year project of writing my book A New Kind of Science. I was working on one of the rather few “I already have this figured out” sections in the book—and I wanted to compare class 4 behavior in 1D and 2D. How was I to display the Game of Life, especially in a static book? Equipped with what’s now the Wolfram Language it was easy to come up with visualizations—looking “out” into a spacetime slice with more distant cells “in a fog”, as well as “down” into a fog of successive states:

\n

Game of Life in A New Kind of Science

\n

And, yes, it was immediately striking how similar the spacetime slice looked to my pictures of 1D class 4 cellular automata. And when I wrote a note for the end of the book about Life, the correspondence became even more obvious. I’d always seen the glider gun as a movie. But in a spacetime slice it “made much more sense”, and looked incredibly similar to analogous structures in 1D class 4 cellular automata:

\n

Different glider guns

\n

In A New Kind of Science I put a lot of effort into historical notes. And as a part of such a note on “History of cellular automata” I had a paragraph about the Game of Life:

\n

Historical note

\n

I first met John Conway in September 1983 (at a conference in the south of France). As I would tell his biographer many years later, my relationship with Conway was complicated from the start. We were both drawn to systems defined by very simple rules, but what we found interesting about them was very different. I wanted to understand the big picture and to explore science-oriented questions (and what I would now call ruliology). Conway, on the other hand, was interested in specific, often whimsically presented results—and in questions that could be couched as mathematical theorems.

\n

In my conversations with Conway, the Game of Life would sometimes come up, but Conway never seemed too interested in talking about it. In 2001, though, when I was writing my note about the history of 2D cellular automata, I spent several hours specifically asking Conway about the Game of Life and its history. At first Conway told me the standard origin story that Life had arisen as a kind of game. A bit later he said he’d at the time just been hired as a logic professor, and had wanted to use Life as a simple way to enumerate the recursive functions. In the end, it was hard to disentangle true recollections from false (or “elaborated”) ones. And, notably, when asked directly about the origin of the specific rules of Life, he was evasive. Of course, none of that should detract from Conway’s achievement in the concept of the Game of Life, and in the definition of the hacker-like culture around it—the fruits of which have now allowed me to do what I’ve done here.

\n

For many years after the publication of A New Kind of Science in 2002, I didn’t actively engage with the Game of Life—though I would hear from Life enthusiasts with some frequency, none more than Gosper, from whom I received hundreds of messages about Life, a typical example from 2017 concerning

\n
\n
\n

\n

and saying:

\n

\n
\n
\n
Subject: Re: Unlimited(?) novelty from just two LWSS backrakes
\n
\n

\n
\n

Novelty is mediated by the sporadic glider gas (which forms very sparse
beams), sporadic debris (forming sparse lines), and is hidden in sporadic
defects in the denser beams and lines. At this scale, each screen pixel
represents 262144 x 262144 Life cells. Thus very sparse lines, e.g. density
10^-5, appear solid, while being very nearly transparent to gliders.

\n

\"Glider

\n

After 3.4G, (sparse) new glider beams are still fading up. The beams
repeatedly strafe the x and y axis stalagmites.

\n

\"x

\n

I suspect this will (very) eventually lead to a positive density of
switch-engines, and thus quadratic population growth.

\n

\n

Finally, around 4.2G, an eater1 (fish hook):

\n

\"Eater1

\n

Depending on background novelty radiation, there ought to be one of
these every few billion, all lying on a line through the origin.

\n

\n

With much help from Tom R, I slogged to 18G, with *zero* new nonmovers
in the 4th quadrant, causing me to propose a mechanism that precluded
future new ones. But then Andrew Trevorrow fired up his Big Mac (TM),
ran 60G, and found three new nonmovers! They are, respectively, a mirror
image(!) of the 1st eater, and two blinkers, in phase, but not aligned with
the origin. I.e., all four are "oners", or at least will lie on different
trash trails.

\n

I’m still waiting for one of these to sprout switch-engines and begin quadratic
growth. But here’s a puzzle: Doesn’t the gas of sparse gliders (actually glider
packets) in the diagonal strips athwart the 1st quadrant already reveal (small
coefficient) quadratic growth? Which will *eventually* dominate? The area of the
strips is increasing quadratically. Their density *appears* to be at least holding,
but possibly along only one axis. I don’t see where quadratically many gliders could
arise. They’re being manufactured at a (roughly) fixed rate. Imagine the above
picture in the distant future. Where is the amplification that will keep those
strips full? --Bill

\n
\n
\n

\n

Does it just happen to come out that way, or was it somehow made to be that way? It was a big shock to my intuition at the beginning of the 1980s when I began to see all the richness that even very simple programs can produce. And it made me start to wonder about all our technological and other achievements. With our goals and intentions, were we producing things that were somehow different from what even simple programs “could do anyway”? How would we be able to tell whether that interstellar radio signal was the product of some sophisticated civilization, or just something that “happened naturally”? My Principle of Computational Equivalence implied that at an ultimate level there wouldn’t really be a way to tell. But I kept on wondering whether there might at least be some signature of “purposes like ours” that we could detect.

\n

At first it was extraterrestrial intelligence and animal intelligence, later also artificial intelligence. But the question kept on coming back: what distinguishes what’s engineered from what just “scientifically happens”? (And, yes, there was a theological question in there too.) I had wondered for a while about using the Game of Life as a testing ground for this, and as the 50th anniversary of the Game of Life approached in 2018, I took this as a cue to explore it.

\n

Over the years I had accumulated a paper file perhaps 6 inches thick about the Game of Life (a few samples from which I’ve shown above). But looking around the web I was impressed at how much well-organized material there now was out there about the Game of Life. I started to try to analyze it, imagining that I might see something like an analog of Moore’s law. Meanwhile, over the preceding decade I had written a lot about the history of science, and I thought that as part of my contribution to the 50th anniversary of the Game of Life I should try to write about its history. What were the stories of all those people whose names were attached to discoveries in Life? A research assistant of mine began to track them down, and interview them. It turned out to be a very disparate group, many of whom knew little about each other. (Though they often, but not always, had in common graduate-level education in math.) And in any case it became clear that writing a coherent history was going to be a huge undertaking. In addition, the first few ways I tried to discern trends in data about the Game of Life didn’t yield much. And soon the 50th anniversary had passed—and I got busy with other things.

\n

But the project of studying the “metaengineering” of the Game of Life stayed on my “to do” list (and a couple of students at our Wolfram Summer School worked on it). Then in 2022 a nice book on the Game of Life came out (by Nathaniel Johnston and Dave Greene, the latter of whom had actually been at our Summer School back in 2011). Had my project been reduced to just reading this book, I wondered. I soon realized that it hadn’t. And there were now all kinds of questions on which I imagined a study of the Game of Life could shed light. Not only questions about the “signature of purpose”. But also questions about novelty and creativity. And about the arc and rhythm of innovation.

\n

Then in 2024 came the surprises of my work on biological evolution, and on machine learning. And I found myself again wondering about how things work when there’s “intentional engineering”. And so I finally decided to do my long-planned study of the Game of Life. There’s much, much more that can be done. But I think what I’ve done here provides an indication of some of the directions one can go, and some of what there is to discover in what is effectively the new field of “computational metaengineering”.

\n

Notes

\n

Thanks to Willem Nielsen of the Wolfram Institute for extensive help, as well as to Ed Pegg of Wolfram Research. (Thanks also to Brad Klee for earlier work.) Over the years, I’ve interacted with many people about the Game of Life. In rough order of my first (“Life”) interactions with them, these include: Jeremy Barford (1973), Philip Gladstone (1973), Nicholas Goulder (1973), Norman Routledge (1973), Bill Gosper (1979), Tim Robinson (1981), Paul Leyland (1981), Norman Margolus (1982), John Conway (1983), Brian Silverman (1985), Eric Weisstein (1999), Ed Pegg (2000), Jon C. R. Bennett (2006), Robert Wainwright (2010), Dave Greene (2011), Steve Bourne (2018), Tanha Kate (2018), Simon Norton (2018), Adam Goucher (2019), Keith Patarroyo (2021), Steph Macurdy (2021), Mark McAndrew (2022), Richard Assar (2024) and Nigel Martin (2025). And, of course, thanks to the many people who’ve contributed over the past half century to the historical progression of Life engineering that I’ve been analyzing here.

\n

Note added April 24, 2025: Thanks to Dave Greene who pointed out an incorrect historical inference, which has now been updated.

\n", + "category": "Data Science", + "link": "https://writings.stephenwolfram.com/2025/03/what-can-we-learn-about-engineering-and-innovation-from-half-a-century-of-the-game-of-life-cellular-automaton/", + "creator": "Stephen Wolfram", + "pubDate": "Tue, 18 Mar 2025 18:25:33 +0000", + "enclosure": "https://content.wolfram.com/sites/43/2025/03/anim2.mp4", + "enclosureType": "video/mp4", + "image": "https://content.wolfram.com/sites/43/2025/03/anim2.mp4", + "id": "", + "language": "en", + "folder": "", + "feed": "wolfram", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "f55fb047a110bd31ec27478c191d54d7", + "highlights": [] + }, + { + "title": "Towards a Computational Formalization for Foundations of Medicine", + "description": "\"\"A Theory of Medicine? As it’s practiced today, medicine is almost always about particulars: “this has gone wrong; this is how to fix it”. But might it also be possible to talk about medicine in a more general, more abstract way—and perhaps to create a framework in which one can study its essential features without […]", + "content": "\"\"

\"Towards

\n

A Theory of Medicine?

\n

As it’s practiced today, medicine is almost always about particulars: “this has gone wrong; this is how to fix it”. But might it also be possible to talk about medicine in a more general, more abstract way—and perhaps to create a framework in which one can study its essential features without engaging with all of its details?

\n

My goal here is to take the first steps towards such a framework. And in a sense my central result is that there are many broad phenomena in medicine that seem at their core to be fundamentally computational—and to be captured by remarkably simple computational models that are readily amenable to study by computer experiment.

\n

I should make it clear at the outset that I’m not trying to set up a specific model for any particular aspect or component of biological systems. Rather, my goal is to “zoom out” and create what one can think of as a “metamodel” for studying and formalizing the abstract foundations of medicine.

\n

What I’ll be doing builds on my recent work on using the computational paradigm to study the foundations of biological evolution. And indeed in constructing idealized organisms we’ll be using the very same class of basic computational models. But now, instead of considering idealized genetic mutations and asking what types of idealized organisms they produce, we’re going to be looking at specific evolved idealized organisms, and seeing what effect perturbations have on them. Roughly, the idea is that an idealized organism operates in its normal “healthy” way if there are no perturbations—but perturbations can “derail” its operation and introduce what we can think of as “disease”. And with this setup we can then think of the “fundamental problem of medicine” as being the identification of additional perturbations that can “treat the disease” and put the organism at least approximately back on its normal “healthy” track.

\n

As we’ll see, most perturbations lead to lots of detailed changes in our idealized organism, much as perturbations in biological organisms normally lead to vast numbers of effects, say at a molecular level. But as in medicine, we can imagine that all we can observe (and perhaps all we care about) are certain coarse-grained features or “symptoms”. And the fundamental problem of medicine is then to work out from these symptoms what “treatment” (if any) will end up being useful. (By the way, when I say “symptoms” I mean the whole cluster of signs, symptoms, tests, etc. that one might in practice use, say for diagnosis.)

\n

It’s worth emphasizing again that I’m not trying here to derive specific, actionable, medical conclusions. Rather, my goal is to build a conceptual framework in which, for example, it becomes conceivable for general phenomena in medicine that in the past have seemed at best vague and anecdotal to begin to be formalized and studied in a systematic way. At some level, what I’m trying to do is a bit like what Darwinism did for biological evolution. But in modern times there’s a critical new element: the computational paradigm, which not only introduces all sorts of new, powerful theoretical concepts, but also leads us to the practical methodology of computer experimentation. And indeed much of what follows is based on the (often surprising) results of computer experiments I’ve recently done that give us raw material to build our intuition—and structure our thinking—about fundamental phenomena in medicine.

\n

A Minimal Metamodel

\n

How can we make a metamodel of medicine? We need an idealization of biological organisms and their behavior and development. We need an idealization of the concept of disease for such organisms. And we need an idealization of the concept of treatment.

\n

For our idealization of biological organisms we’ll use a class of simple computational systems called cellular automata (that I happen to have studied since the early 1980s). Here’s a specific example:

\n
\n
\n

\n

What’s going on here is that we’re progressively constructing the pattern on the left (representing the development and behavior of our organism) by repeatedly applying cases of the rules on the right (representing the idealized genome—and other biochemical, etc. rules—of our organism). Roughly we can think of the pattern on the left as corresponding to the “life history” of our organism—growing, developing and eventually dying as it goes down the page. And even though there’s a rather organic look to the pattern, remember that the system we’ve set up isn’t intended to provide a model for any particular real-world biological system. Rather, the goal is just for it to capture enough of the foundations of biology that it can serve as a successful metamodel to let us explore our questions about the foundations of medicine.

\n

Looking at our model in more detail, we see that it involves a grid of squares—or “cells” (computational, not biological)—each having one of 4 possible colors (white and three others). We start from a single red “seed” cell on the top row of the grid, then compute the colors of cells on subsequent steps (i.e. on subsequent rows down the page) by successively applying the rules on the right. The rules here are basically very simple. But we can see that when we run them they lead to a fairly complicated pattern—which in this case happens to “die out” (i.e. all cells become white) after exactly 101 steps.
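This kind of model is simple enough to sketch directly. Below is a minimal Python stand-in: a 1D, 4-color, nearest-neighbor cellular automaton grown row by row from a single seed cell. The rule table here is chosen arbitrarily (the article's specific rule isn't reproduced), so the pattern it produces is only illustrative of the setup, not of the particular "organism" shown above.

```python
import numpy as np

K = 4  # possible cell colors; 0 is the white background

def evolve(rule, width=101, steps=50):
    """Grow the pattern row by row: each cell's new color is looked up
    from its (left, center, right) neighborhood on the previous row."""
    grid = np.zeros((steps, width), dtype=int)
    grid[0, width // 2] = 1  # single "seed" cell on the top row
    for t in range(1, steps):
        for x in range(width):
            nbhd = (grid[t - 1, (x - 1) % width],
                    grid[t - 1, x],
                    grid[t - 1, (x + 1) % width])
            grid[t, x] = rule[nbhd]
    return grid

# An arbitrary illustrative rule table -- NOT the rule from the article.
rng = np.random.default_rng(0)
rule = {(a, b, c): int(rng.integers(0, K))
        for a in range(K) for b in range(K) for c in range(K)}
rule[(0, 0, 0)] = 0  # keep the empty background quiescent

pattern = evolve(rule)  # each row is one step of the "life history"
```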

\n

So what happens if we perturb this system? On the left here we’re showing the system as above, without perturbation. But on the right we’re introducing a perturbation by changing the color of a particular cell (on step 16)—leading to a rather different (if qualitatively similar) pattern:

\n
\n
\n

\n

Here are the results of some other perturbations to our system:

\n
\n
\n

\n

Some perturbations (like the one in the second panel here) quickly disappear; in essence the system quickly “heals itself”. But in most cases even single-cell perturbations like the ones here have a long-term effect. Sometimes they can “increase the lifetime” of the organism; often they will decrease it. And sometimes—like in the last case shown here—they will lead to essentially unbounded “tumor-like” growth.
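In code, a "perturbation" is just an override of one cell at one step, and the "lifetime" is the first step at which the row becomes all white. Here's a sketch along those lines, again with an arbitrary stand-in rule table rather than the article's organism, so the particular lifetimes it yields are illustrative only:

```python
import numpy as np

K = 4
rng = np.random.default_rng(0)
# Arbitrary stand-in rule table -- not the article's organism.
RULE = {(a, b, c): int(rng.integers(0, K))
        for a in range(K) for b in range(K) for c in range(K)}
RULE[(0, 0, 0)] = 0

def lifetime(perturb=None, width=81, max_steps=200):
    """Run from a single seed cell; return the first step at which all
    cells are white ("death"), capped at max_steps if it survives.
    perturb is an optional (step, position, color) single-cell change."""
    row = np.zeros(width, dtype=int)
    row[width // 2] = 1
    for t in range(1, max_steps):
        row = np.array([RULE[(row[(x - 1) % width], row[x],
                              row[(x + 1) % width])]
                        for x in range(width)])
        if perturb is not None and t == perturb[0]:
            row[perturb[1]] = perturb[2]  # the single-cell perturbation
        if not row.any():
            return t
    return max_steps

healthy = lifetime()                      # unperturbed "natural" lifetime
diseased = lifetime(perturb=(16, 30, 2))  # perturbed at step 16
```

Comparing `healthy` and `diseased` across many perturbations is exactly the kind of experiment behind the pictures above.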

\n

In biological or medical terms, the perturbations we’re introducing are minimal idealizations of “things that can happen to an organism” in the course of its life. Sometimes the perturbations will have little or no effect on the organism. Or at least they won’t “really hurt it”—and the organism will “live out its natural life” (or even extend it a bit). But in other cases, a perturbation can somehow “destabilize” the organism, in effect “making it develop a disease”, and often making it “die before its time”.

\n

But now we can formulate what we can think of as the “fundamental problem of medicine”: given that perturbations have had a deleterious effect on an organism, can we find subsequent perturbations to apply that will serve as a “treatment” to overcome the deleterious effect?

\n

The first panel here shows a particular perturbation that makes our idealized organism die after 47 steps. The subsequent panels then show various “treatments” (i.e. additional perturbations) that serve at least to “keep the organism alive”:

\n
\n
\n

\n

In the later panels here the “life history” of the organism gets closer to the “healthy” unperturbed form shown in the final panel. And if our criterion is restoring overall lifetime, we can reasonably say that the “treatment has been successful”. But it’s notable that the detailed “life history” (and perhaps “quality of life”) of the organism will essentially never be the same as before: as we’ll see in more detail later, it’s almost inevitably the case that there’ll be at least some (and often many) long-term effects of the perturbation+treatment even if they’re not considered deleterious.
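The search for a "treatment" can be made completely explicit: enumerate candidate later perturbations and keep one that restores at least the healthy lifetime. A brute-force sketch, under the same caveat that the rule table is an arbitrary stand-in and that lifetime alone is a very coarse success criterion:

```python
import numpy as np

K = 4
rng = np.random.default_rng(3)
# Arbitrary stand-in rule table -- not the article's organism.
RULE = {(a, b, c): int(rng.integers(0, K))
        for a in range(K) for b in range(K) for c in range(K)}
RULE[(0, 0, 0)] = 0

def lifetime(perturbs=(), width=31, max_steps=100):
    """Lifetime under a set of (step, position, color) perturbations:
    the first all-white step, capped at max_steps."""
    row = np.zeros(width, dtype=int)
    row[width // 2] = 1
    schedule = {t: (x, c) for t, x, c in perturbs}
    for t in range(1, max_steps):
        row = np.array([RULE[(row[(x - 1) % width], row[x],
                              row[(x + 1) % width])]
                        for x in range(width)])
        if t in schedule:
            x, c = schedule[t]
            row[x] = c
        if not row.any():
            return t
    return max_steps

disease = (16, 10, 2)   # a "disease" perturbation at step 16
target = lifetime()     # the unperturbed ("healthy") lifetime

# Brute-force search over later single-cell "treatments" that restore
# at least the healthy lifetime despite the disease perturbation.
treatment = next(((t, x, c)
                  for t in range(17, 26)
                  for x in range(31)
                  for c in range(K)
                  if lifetime([disease, (t, x, c)]) >= target),
                 None)
```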

\n

So now that we’ve got an idealized model of the “problem of medicine”, what can we say about solving it? Well, the main thing is that we can get a sense of why it’s fundamentally hard. And beyond anything else, the central issue is a fundamentally computational one: the phenomenon of computational irreducibility.

\n

Given any particular cellular automaton rule, with any particular initial condition, one can always explicitly run the rule, step by step, from that initial condition, to see what will happen. But can one do better? Experience with mathematical science might make one imagine that as soon as one knows the underlying rule for a system, one should in principle immediately be able to “solve the equations” and jump ahead to work out everything about what the system does, without explicitly tracing through all the steps. But one of the central things I discovered in studying simple programs back in the early 1980s is that it’s common for such systems to show what I called computational irreducibility, which means that the only way to work out their detailed behavior is essentially just to run their rules step by step and see what happens.

\n

So what about biology? One might imagine that with its incremental optimization, biological evolution would produce systems that somehow avoid computational irreducibility, and (like simple machinery) have obvious easy-to-understand mechanisms by which they operate. But in fact that’s not what biological evolution typically seems to produce. And instead—as I’ve recently argued—what it seems to do is basically just to put together randomly found “lumps of irreducible computation” that happen to satisfy its fitness criterion. And the result is that biological systems are full of computational irreducibility, and mostly aren’t straightforwardly “mechanically explainable”. (The presence of computational irreducibility is presumably also why theoretical biology based on mathematical models has always been so challenging.)

\n

But, OK, given all this computational irreducibility, how is it that medicine is even possible? How is it that we can know enough about what a biological system will do to be able to determine what treatment to use on it? Well, computational irreducibility makes it hard. But it’s a fundamental feature of computational irreducibility that within any computationally irreducible process there must always be pockets of computational reducibility. And if we’re trying to achieve only some fairly coarse objective (like maximizing overall lifetime) it’s potentially possible to leverage some pocket of computational reducibility to do this.

\n

(And indeed pockets of computational reducibility within computational irreducibility are what make many things possible—including having understandable laws of physics, doing higher mathematics, etc.)

\n

The Diversity and Classification of Disease

\n

With our simple idealization of disease as the effect of perturbations on the life history of our idealized organism, we can start asking questions like “What is the distribution of all possible diseases?”

\n

And to begin exploring this, here are the patterns generated with a random sample of the 4383 possible single-point perturbations to the idealized organism we’ve discussed above:

\n
\n
\n

\n

Clearly there’s a lot of variation in these life histories—in effect a lot of different symptomologies. If we average them all together we lose the detail and we just get something close to the original:

\n
\n
\n

\n

But if we look at the distribution of lifetimes, we see that while it’s peaked at the original value, it nevertheless extends to both shorter and longer values:

\n
\n
\n

\n

In medicine (or at least Western medicine) it’s been traditional to classify “things that can go wrong” in terms of discrete diseases. And we can imagine also doing this in our simple model. But it’s already clear from the array of pictures above that this is not going to be a straightforward task. We’ve got a different detailed pattern for every different perturbation. So how should we group them together?

\n

Well—much as in medicine—it depends on what we care about. In medicine we might talk about signs and symptoms, which in our idealized model we can basically identify with features of patterns. And as an example, we might decide that the only features that matter are ones associated with the boundary shape of our pattern:

\n
\n
\n

\n

So what happens to these boundary shapes with different perturbations? Here are the most frequent shapes found (together with their probabilities):

\n
\n
\n

\n

We might think of these as representing “common diseases” of our idealized organism. But what if we look at all possible “diseases”—at least all the ones produced by single-cell perturbations? Using boundary shape as our way to distinguish “diseases” we find that if we plot the frequency of diseases against their rank we get roughly a power law distribution (and, yes, it’s not clear why it’s a power law):
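Tallying "disease" frequencies by rank is straightforward to do. A sketch with hypothetical boundary-shape labels (the actual shape data isn't reproduced here); on a log-log plot, a power law frequency ∝ rank^(-α) would show up as a straight line:

```python
from collections import Counter

def rank_frequency(labels):
    """Return (label, relative frequency) pairs, most common first."""
    counts = Counter(labels)
    total = sum(counts.values())
    return [(label, n / total) for label, n in counts.most_common()]

# Hypothetical boundary-shape "diseases" observed across perturbations.
shapes = (["A"] * 50 + ["B"] * 25 + ["C"] * 12 + ["D"] * 6 +
          ["E"] * 3 + ["F"] * 2 + ["G"] * 2)
ranked = rank_frequency(shapes)
# ranked[0] is the most common "disease", here with frequency 0.5
```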

\n
\n
\n

\n

What are the “rare diseases” (i.e. ones with low frequency) like? Their boundary shapes can be quite diverse:

\n
\n
\n

\n

But, OK, can we somehow quantify all these “diseases”? For example, as a kind of “imitation medical test” we might look at how far to the left the boundary of each pattern goes. With single-point perturbations, 84% of the time it’s the same as in the unperturbed case—but there’s a distribution of other, “less healthy” results (here plotted on a log scale)

\n
\n
\n

\n

with extreme examples being:

\n
\n
\n

\n

And, yes, we could diagnose any pattern that goes further to the left than the unperturbed one as a case of, say, “leftiness syndrome”. And we might imagine that if we set up enough tests, we could begin to discriminate between many discrete “diseases”. But somehow this seems quite ad hoc.

\n

So can we perhaps be more systematic by using machine learning? Let’s say we just look at each whole pattern, then try to place it in an image feature space, say a 2D one. Here’s an example of what we get:

\n
\n
\n

\n

The details of this depend on the particulars of the machine learning method we’ve used (here the default FeatureSpacePlot method in Wolfram Language). But it’s a fairly robust result that “visually different” patterns end up separated—so that in effect the machine learning is successfully automating some kind of “visual diagnosis”. And there’s at least a little evidence that the machine learning will identify separated clusters of patterns that we can reasonably identify as “truly distinct diseases”—even as the more common situation is that between any two patterns, there are intermediate ones that aren’t neatly classified as one disease or the other.
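To make the idea concrete, here's a bare-bones stand-in for this kind of dimension reduction: flatten each pattern and project onto its top two principal components via an SVD. This is far cruder than what FeatureSpacePlot actually does (which also extracts learned image features first), and the patterns here are random placeholders:

```python
import numpy as np

def feature_space_2d(patterns):
    """Project flattened patterns into 2D with plain PCA -- a crude
    stand-in for FeatureSpacePlot-style dimension reduction."""
    X = np.array([p.ravel() for p in patterns], dtype=float)
    X -= X.mean(axis=0)                 # center the data
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:2].T                 # coordinates on top 2 components

# Hypothetical stand-in "patterns" (random 4-color grids).
rng = np.random.default_rng(1)
patterns = [rng.integers(0, 4, size=(20, 20)) for _ in range(30)]
coords = feature_space_2d(patterns)     # one 2D point per pattern
```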

\n

Somewhat in the style of the human “International Classification of Diseases” (ICD), we can try arranging all our patterns in a hierarchy—though it’s basically inevitable that we’ll always be able to subdivide further, and there’ll never be a clear point at which we can say “we’ve classified all the diseases”:

\n
\n
\n

\n

By the way, in addition to talking about possible diseases, we also need to discuss what counts as “healthy”. We could say that our organism is only “healthy” if its pattern is exactly what it would be without any perturbation (“the natural state”). But what probably better captures everyday medical thinking is to say that our organism should be considered “healthy” if it doesn’t have symptoms (or features) that we consider bad. And in particular, at least “after the fact” we might be able to say that it must have been healthy if its lifetime turned out to be long.

\n

It’s worth noting that even in our simple model, while there are many perturbations that reduce lifetime, there are also perturbations that increase lifetime. In the course of biological evolution, genetic mutations of the overall underlying rules for our idealized organism might have managed to achieve a certain longevity. But the point is that nothing says “longevity perturbations” applied “during the life of the organism” can’t get further—and indeed here are some examples where they do:

\n
\n
\n

\n

And, actually, in a feature that’s not (at least yet) reflected in human medicine, there are perturbations that can make the lifetime very significantly longer. And for the particular idealized organism we’re studying here, the most extreme examples obtained with single-point perturbations are:

\n
\n
\n

\n

OK, but what happens if we consider perturbations at multiple points? There are immediately vastly more possibilities. Here are some examples of the 10 million or so possible configurations of two perturbations:

\n
\n
\n

\n

And here are examples with three perturbations:

\n
\n
\n

\n

Here are examples if we try to apply five perturbations (though sometimes the organism is “already dead” before we can apply later perturbations):

\n
\n
\n

\n

What happens to the overall distribution of lifetimes in these cases? Already with two perturbations, the distribution gets much broader, and with three or more, the peak at the original lifetime has all but disappeared, with a new peak appearing for organisms that in effect die almost immediately:

\n
\n
\n

\n

In other words, the particular idealized organism that we’re studying is fairly robust against one perturbation, and perhaps even two, but with more perturbations it’s increasingly likely to succumb to “infant mortality”. (And, yes, if one increases the number of perturbations the “life expectancy” progressively decreases.)

\n

But what about the other way around? With multiple perturbations, can the organism in effect “live forever”? Here are some examples where it’s still “going strong” after 300 steps:

\n
\n
\n

\n

But after 500 steps most of these have died out:

\n
\n
\n

\n

As is typical in the computational universe (perhaps like in medicine) there are always surprises, courtesy of computational irreducibility. Like the sudden appearance of the obviously periodic case (with period 25):

\n
\n
\n

\n

As well as the much more complicated cases (where in the final pictures the pattern has been “rectified”):

\n
\n
\n

\n

So, yes, in these cases the organism does in effect “live forever”—though not in an “interesting” way. And indeed such cases might remind us of tumor-like behavior in biological organisms. But what about a case that not only lives forever, but also grows forever? Well, needless to say, lurking out in the computational universe, one can find an example:

\n
\n
\n

\n

The “incidence” of this behavior is about one in a million for 2 perturbations (or, more precisely, 7 out of 9.6 million possibilities), and one in 300,000 for 3 perturbations. And although there presumably are even more complicated behaviors out there to find, they don’t show up with 2 perturbations, and their incidence with 3 perturbations is below about one in 100 million.

\n

Diagnosis & Prognosis

\n

A fundamental objective in medicine is to predict from tests we do or symptoms and signs we observe what will happen. And, yes, we now know that computational irreducibility inevitably makes this in general hard. But we also know from experience that a certain amount of prediction is possible—which we can now interpret as successfully managing to tap into pockets of computational reducibility.

\n

So as an example, let’s ask what the prognosis is for our idealized organism based on the width of its pattern we measure at a certain step. So here, for example, is what happens to the original lifetime distribution (in green) if we consider only cases where the width of the measured pattern after 25 steps is less than its unperturbed (“healthy”) value (and where we’re dropping the 1% of cases when the organism was “already dead” before 25 steps):

\n
\n
\n

\n

Our “narrow” cases represent about 5% of the total. Their median lifetime is 57, as compared with the overall median of 106. But clearly the median alone does not tell the whole story. And nor do the two survival curves:
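A survival curve of this kind is just the fraction of organisms whose lifetime exceeds each step. A minimal sketch, using hypothetical lifetime samples (the model's actual lifetimes aren't reproduced here):

```python
import numpy as np

def survival_curve(lifetimes, max_t):
    """S(t) = fraction of organisms with lifetime > t, for t = 0..max_t-1."""
    lifetimes = np.asarray(lifetimes)
    return np.array([(lifetimes > t).mean() for t in range(max_t)])

# Hypothetical lifetimes: all perturbed cases vs. the "narrow" subset.
all_cases = np.array([101, 47, 106, 88, 120, 57, 101, 33, 106, 99])
narrow_cases = np.array([47, 57, 33])   # patterns narrow at step 25

s_all = survival_curve(all_cases, 130)
s_narrow = survival_curve(narrow_cases, 130)
# Compare medians: np.median(all_cases) vs. np.median(narrow_cases)
```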

\n
\n
\n

\n

And, for example, here are the actual widths as a function of time for all the narrow cases, compared to the sequence of widths for the unperturbed case:

\n
\n
\n

\n

These pictures don’t make it look promising that one could predict lifetime from the single test of whether the pattern was narrow at step 25. Like in analogous medical situations, one needs more data. One approach in our case is to look at actual “narrow” patterns (up to step 25)—here sorted by ultimate lifetime—and then to try to identify useful predictive features (though, for example, to attempt any serious machine learning training would require a lot more examples):

\n
\n
\n

\n

But perhaps a simpler approach is not just to do a discrete “narrow or not” test, but rather to look at the actual width at step 25. So here are the lifetimes as a function of width at step 25

\n
\n
\n

\n

and here’s the distribution of outcomes, together with the median in each case:

\n
\n
\n

\n

The predictive power of our width measurement is obviously quite weak (though there’s doubtless a way to “hack p values” to get at least something out). And, unsurprisingly, machine learning doesn’t help. Like here’s a machine learning prediction (based on decision tree methods) for lifetime as a function of width (that, yes, is very close to just being the median):
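A single-feature decision tree of this kind in effect collapses to a lookup of the median outcome per feature value, which can be written directly. A sketch on hypothetical (width, lifetime) training pairs:

```python
from collections import defaultdict
from statistics import median

def fit_median_predictor(widths, lifetimes):
    """Map each observed width to the median lifetime seen at that
    width -- roughly what the decision-tree prediction collapses to."""
    buckets = defaultdict(list)
    for w, life in zip(widths, lifetimes):
        buckets[w].append(life)
    return {w: median(v) for w, v in buckets.items()}

# Hypothetical (width at step 25, eventual lifetime) training pairs.
widths    = [10, 10, 12, 12, 12, 14, 14]
lifetimes = [40, 60, 90, 100, 110, 101, 101]
model = fit_median_predictor(widths, lifetimes)
# model[10] -> 50, model[12] -> 100, model[14] -> 101
```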

\n
\n
\n

\n
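That a tree-based prediction ends up hugging the median is easy to see in miniature: for a single feature, the prediction that minimizes absolute error at each observed width is just the median lifetime at that width. A sketch with synthetic (width, lifetime) training pairs (not data from the model):

```python
from statistics import median

# Predict lifetime from width by taking, for each observed width, the
# median lifetime among training examples with that width -- essentially
# what a shallow decision tree on one feature ends up learning.
# The training pairs below are synthetic, not from the actual model.

def fit_median_predictor(pairs):
    by_width = {}
    for width, life in pairs:
        by_width.setdefault(width, []).append(life)
    table = {w: median(ls) for w, ls in by_width.items()}
    overall = median(life for _, life in pairs)
    # Fall back to the overall median for widths never seen in training.
    return lambda w: table.get(w, overall)

train = [(3, 40), (3, 60), (3, 110), (5, 90), (5, 106), (5, 120), (7, 100)]
predict = fit_median_predictor(train)
print(predict(3), predict(5), predict(9))
```

When the lifetimes at a given width are spread as widely as they are here, this per-width median is about the best a computationally bounded predictor of this kind can do.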

Does it help if we use more history? In other words, what happens if we make our prediction not just from the width at a particular step, but from the history of all widths up to that point? As one approach, we can make a collection of “training examples” of what lifetimes particular “width histories” (say up to step 25) lead to:

\n
\n
\n

\n

There’s already something of an issue here, because a given width history—which, in a sense, is a “coarse graining” of the detailed “microscopic” history—can lead to multiple different final lifetimes:

\n
\n
\n

\n

But we can still go ahead and try to use machine learning to predict lifetimes from width histories based on training on (say, half) of our training data—yielding less than impressive results (with the vertical line being associated with multiple lifetimes from a single width history in the training data):

\n
\n
\n

\n

So how can we do better? Well, given the underlying setup for our system, if we could determine not just the width but the whole precise sequence of values for all cells, even just at step 25, then in principle we could use this as an “initial condition” and run the system forward to see what it does. But regardless of it being “medically implausible” to do this, it isn’t much of a prediction anyway; it’s more just “watch and see what happens”. And the point is that insofar as there’s computational irreducibility, one can’t expect—at least in full generality—to do much better. (And, as we’ll argue later, there’s no reason to think that organisms produced by biological evolution will avoid computational irreducibility at this level.)

\n

But still, within any computationally irreducible system, there are always pockets of computational reducibility. So we can expect that there will be some predictions that can be made. But the question is whether those predictions will be about things we care about (like lifetime) or even about things we can measure. Or, in other words, will they be predictions that speak to things like symptoms?

\n

Our Physics Project, for example, involves all sorts of underlying processes that are computationally irreducible. But the key point there is that what physical observers like us perceive are aggregate constructs (like overall features of space) that show significant computational reducibility. And in a sense there’s an analogous issue here: there’s computational irreducibility underneath, but what do “medical observers” actually perceive, and are there computationally reducible features related to that? If we could find such things, then in a sense we’d have identified “general laws of medicine” much like we now have “general laws of physics”.

\n

The Problem of Finding Treatments

\n

We’ve talked a bit about giving a prognosis for what will happen to an idealized organism that’s suffered a perturbation. But what about trying to fix it? What about trying to intervene with another “treatment perturbation” that can “heal” the system, and give it a life history that’s at least close to what it would have had without the original perturbation?

\n

Here’s our original idealized organism, together with how it behaves when it “suffers” a particular perturbation that significantly reduces its lifetime:

\n
\n
\n

\n

But what happens if we now try applying a second perturbation? Here are a few random examples:

\n
\n
\n

\n

None of these examples convincingly “heal” the system. But let’s (as we can in our idealized model) just enumerate all possible second perturbations (here 1554 of them). Then it turns out that a few of these do in fact successfully give us patterns that at least exactly reproduce the original lifetime:

\n
\n
\n

\n

Do these represent true examples of “healing”? Well, it depends on what we mean. Yes, they’ve managed to make the lifetime exactly what it would have been without the original “disease-inducing” perturbation. But in essentially all cases we see here that there are various “long-term side effects”—in the sense that the detailed patterns generated end up having obvious differences from the original unperturbed “healthy” form.

\n

The one exception here is the very first case, in which the “disease was caught early enough” that the “treatment perturbation” manages to completely heal the effects of the “disease perturbation”:

\n
\n
\n

\n

We’ve been talking here about intervening with “treatment perturbations” to “heal” our “disease perturbation”. But actually it turns out that there are plenty of “disease perturbations” which automatically “heal themselves”, without any “treatment” intervention. In fact, of all possible 4383 single perturbations, 380 essentially heal themselves.
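The bookkeeping behind a census like this can be illustrated with a drastically simplified stand-in: a two-color rule in which a cell stays 1 only if it and both its neighbors are 1, so that a solid block of 1s shrinks and dies, giving a finite “lifetime”. Enumerating all single-cell flips of the initial state then separates “disease” flips that change the lifetime from flips that “heal themselves”. (The k = 4 rule in the text is of course far richer; this only illustrates the procedure.)

```python
# Toy two-color rule: a cell stays 1 only if it and both neighbors are 1,
# so a solid block of 1s shrinks by two cells per step and dies --
# a stand-in for the k = 4 model organism in the text.

WIDTH = 31

def step(row):
    return [1 if row[i - 1] and row[i] and row[(i + 1) % WIDTH] else 0
            for i in range(WIDTH)]

def lifetime(row, max_steps=100):
    """Number of steps for which any cell is still alive."""
    t = 0
    while any(row) and t < max_steps:
        row, t = step(row), t + 1
    return t

def seed():
    row = [0] * WIDTH
    for i in range(11, 20):        # a block of nine 1s
        row[i] = 1
    return row

base = lifetime(seed())            # unperturbed lifetime

self_healing = 0
for i in range(WIDTH):             # all single-cell perturbations
    row = seed()
    row[i] ^= 1
    if lifetime(row) == base:      # lifetime unchanged: "self-healing"
        self_healing += 1

print(base, self_healing)
```

In this toy, every 0-to-1 flip leaves the lifetime unchanged, while every flip inside the block shortens it. Note too that “lifetime unchanged” is a weaker criterion than the one in the text, where the perturbed pattern actually reconverges to the unperturbed one.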

\n

In many cases, the “healing” happens very locally, after one or two steps:

\n
\n
\n

\n

But there are also more complicated cases, where perturbations produce fairly large-scale changes in the pattern—that nevertheless “spontaneously heal themselves”:

\n
\n
\n

\n

(Needless to say, in cases where a perturbation “spontaneously heals itself”, adding a “treatment perturbation” will almost always lead to a worse outcome.)

\n

So how should we think about perturbations that spontaneously heal themselves? They’re like seeds for diseases that never take hold, or like diseases that quickly burn themselves out. But from a theoretical point of view we can think of them as being where the unperturbed life history of our idealized organism is acting as an attractor, to which certain perturbed states inexorably converge—a bit like how friction can dissipate perturbations to patterns of motion in a mechanical system.

\n

But let’s say we have a perturbation that doesn’t “spontaneously heal itself”. Then to remediate it we have to “do the medical thing” and in our idealized model try to find a “treatment perturbation”. So how might we systematically set about doing that? Well, in general, computational irreducibility makes it difficult. And as one indication of this, here’s what lifetime is achieved by “treatment perturbations” made at each possible point in the pattern (after the initial perturbation):

\n
\n
\n

\n

We can think of this as providing a map of what the effects of different treatment perturbations will be. Here are some other examples, for different initial perturbations (or, in effect, different “diseases”):

\n
\n
\n

\n

There’s some regularity here. But the main observation is that different detailed choices of treatment perturbations will often have very different effects. In other words, even “nearby treatments” will often lead to very different outcomes. Given computational irreducibility, this isn’t surprising. But in a sense it underscores the difficulty of finding and applying “treatments”. By the way, cells indicated in dark red above are ones where treatment leads to a pattern that lives “excessively long”—or in effect shows tumor-like characteristics. And the fact that these are scattered so seemingly randomly reflects the difficulty of predicting whether such effects will occur as a result of treatment.

\n

In what we’ve done so far here, our “treatment” has always consisted of just a single additional perturbation. But what about applying more perturbations? For example, let’s say we do a series of experiments where after our first “treatment perturbation” we progressively try other treatment perturbations. If a given additional perturbation doesn’t get further from the desired lifetime, we keep it. Otherwise we reject it, and try another perturbation. Here’s an example of what happens if we do this:

\n
\n
\n

\n

The highlighted panels represent perturbations we kept. And here’s how the overall lifetime “converges” over successive iterations in our experiment:

\n
\n
\n

\n
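The keep-or-reject procedure just described is a greedy stochastic search, and its skeleton is easy to write down. The sketch below uses a toy two-color shrinking-block rule as a stand-in for the real model, with single-cell flips of the initial state standing in for treatment perturbations applied at arbitrary points:

```python
import random

# Greedy "treatment" search in the spirit of the procedure described:
# propose perturbations one at a time, keep each only if it doesn't
# move the lifetime further from the target, otherwise undo it. The
# dynamics here are a toy two-color rule (a cell stays 1 only if it
# and both neighbors are 1), not the k = 4 model itself.

random.seed(0)
WIDTH = 31

def step(row):
    return [1 if row[i - 1] and row[i] and row[(i + 1) % WIDTH] else 0
            for i in range(WIDTH)]

def lifetime(row, max_steps=100):
    t = 0
    while any(row) and t < max_steps:
        row, t = step(row), t + 1
    return t

state = [0] * WIDTH
for i in range(11, 20):            # healthy organism: a block of nine 1s
    state[i] = 1
target = lifetime(state)           # its unperturbed lifetime

state[15] = 0                      # "disease perturbation" shortens it
best = abs(lifetime(state) - target)

for _ in range(2000):              # propose random single-cell treatments
    i = random.randrange(WIDTH)
    state[i] ^= 1
    err = abs(lifetime(state) - target)
    if err <= best:
        best = err                 # not further from target: keep it
    else:
        state[i] ^= 1              # further from target: undo it

print(target, best)
```

In this simple landscape the search reliably recovers the target lifetime; in the irreducible k = 4 dynamics of the text, nearby treatments can have wildly different effects, which is exactly what makes the analogous search hard.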

In what we just did, we allowed additional treatment perturbations to be added at any subsequent step. But what if we require treatment perturbations to always be added on successive steps—starting right after the “disease perturbation” occurred? Here’s an example of what happens in this case:

\n
\n
\n

\n

And here’s what we see zooming in at the beginning:

\n
\n
\n

\n

In a sense this corresponds to “doing aggressive treatment” as soon as the initial “disease perturbation” has occurred. And a notable feature of the particular example here is that when our succession of treatment perturbations has succeeded in “restoring the lifetime” (which happens fairly quickly), the life history it produces is similar (though not identical) to the original unperturbed case.

\n

That definitely doesn’t always happen, as this example illustrates—but it’s fairly common:

\n
\n
\n

\n

It’s worth pointing out that if we allowed ourselves to do many single perturbations at the same time (i.e. on the same row of the pattern) we could effectively just “define new initial conditions” for the pattern, and, for example, perfectly “regenerate” the original unperturbed pattern after this “reset”. And in general we can imagine in effect “hot-wiring” the organism by applying large numbers of treatment perturbations that just repeatedly direct it back to its unperturbed form.

\n

But such extensive and detailed “intervention”—that in effect replaces the whole state of the organism—seems far from what might be practical in typical (current) medicine (except perhaps in some kind of “regenerative treatment”). And indeed in actual (current) medicine one is normally operating in a situation where one does not have anything close to perfect “cell-by-cell” information on the state of an organism—and instead one has to figure out things like what treatment to give based on much coarser “symptom-level” information. (In some ways, though, the immune system does something closer to cell-by-cell “treatment”.)

\n

So what can one do given coarse-grained information? As one example, let’s consider trying to predict what treatment perturbation will be best using the kind of pattern-width information we discussed above. Specifically, let’s say that we have the history of the overall width of a pattern up to a particular point, then from this we want to predict what treatment perturbation will lead to the best lifetime outcome for the system. There are a variety of ways we could approach this, but one is to make predictions of where to apply a treatment perturbation using machine learning trained on examples of optimal such perturbations.

\n

This is analogous to what we did in the previous section in applying machine learning to predict lifetime from width history. But now we want to predict from width history what treatment perturbation to apply. To generate our training data we can search for treatment perturbations that lead to the unperturbed lifetime when starting from life histories with a given width history. Now we can use a simple neural net to create a predictor that tries to tell us from a width history what “treatment to give”. And here are comparisons between our earlier search results based on looking at complete life histories—and (shown with red arrows) the machine learning predictions based purely on width history before the original disease perturbation:

\n
\n
\n

\n

It’s clear that the machine learning is doing something—though it’s not as impressive as perhaps it looks, because a wide range of perturbations all in fact give rather similar life histories. So as a slightly more quantitative indication of what’s going on, here’s the distribution of lifetimes achieved by our machine-learning-based therapy:

\n
\n
\n

\n

Our “best treatment” was able to give lifetime 101 in all these cases. And while the distribution we’ve now achieved looks peaked around the unperturbed value, dividing this distribution by what we’d get without any treatment at all makes it clear that not so much was achieved by the machine learning we were able to do:

\n
\n
\n

\n

And in a sense this isn’t surprising; our machine learning—based, as it is, on coarse-grained features—is quite weak compared to the computational irreducibility of the underlying processes at work.

\n

The Effect of Genetic Diversity

\n

In what we’ve done so far, we’ve studied just a single idealized organism—with a single set of underlying “genetic rules”. But in analogy to the situation with humans, we can imagine a whole population of genetically slightly different idealized organisms, with different responses to perturbations, etc.

\n

Many changes to the underlying rules for our idealized organism will lead to unrecognizably different patterns, that don’t, for example, have the kind of finite-but-long lifetimes we’ve been interested in. But it turns out that in the rules for our particular idealized organism there are some specific changes that actually don’t have any effect at all—at least on the unperturbed pattern of behavior. And the reason for this is that in generating the unperturbed pattern these particular cases in the rule happen never to be used:

\n
\n
\n

\n

And the result is that any one of the 4³ = 64 possible choices of outcomes for those cases in the rule will still yield the same unperturbed pattern. If there’s a perturbation, however, different cases in the rule can be sampled—including these ones. It’s as if cases in the rule that are initially “non-coding” end up being “coding” when the path of behavior is changed by a perturbation. (Or, said differently, it’s like different genes being activated when conditions are different.)
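Finding the “non-coding” cases is a matter of running the unperturbed pattern and recording every neighborhood that actually occurs; any case that never occurs can be redefined freely without changing the pattern. A sketch using a toy two-color rule (a cell stays 1 only if it and both neighbors are 1, run from a block of nine 1s), as a stand-in for the k = 4 organism:

```python
from itertools import product

# Record every three-cell neighborhood sampled during the unperturbed
# evolution of a toy two-color rule; rule cases that are never sampled
# are "non-coding" and can be redefined without changing the pattern.
# (In the text's k = 4 organism, three unused cases give 4^3 = 64
# indistinguishable variants.)

WIDTH = 31

def step(row):
    return [1 if row[i - 1] and row[i] and row[(i + 1) % WIDTH] else 0
            for i in range(WIDTH)]

row = [0] * WIDTH
for i in range(11, 20):            # a block of nine 1s
    row[i] = 1

used = set()
while any(row):
    for i in range(WIDTH):
        used.add((row[i - 1], row[i], row[(i + 1) % WIDTH]))
    row = step(row)

unused = sorted(set(product([0, 1], repeat=3)) - used)
print(unused)                      # the "non-coding" cases
print(2 ** len(unused))            # indistinguishable rule variants
```

In this toy only the 101 neighborhood never occurs, so there are 2¹ = 2 rules with identical unperturbed behavior; a perturbation that ever creates a 101 configuration would make that case “coding”.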

\n

So to make an idealized model of something like a population with genetic diversity, we can look at what happens with different choices of our (initially) “non-coding” rule outcomes:

\n
\n
\n

\n

Before the perturbation, all these inevitably show the same behavior, because they’re never sampling “non-coding” rule cases. But as soon as there’s a perturbation, the pattern is changed, and after varying numbers of steps, previously “non-coding” rule cases do get sampled—and can affect the outcome.

\n

Here are the distinct cases of what happens in all 64 “genetic variants”—with the red arrow in each case indicating where the pattern first differs from what it is with our original idealized organism:

\n
\n
\n

\n

And here is then the distribution of lifetimes achieved—in effect showing the differing consequences of this particular “disease perturbation” on all our genetic variants:

\n
\n
\n

\n

What happens with other “disease perturbations”? Here’s a sample of distributions of lifetimes achieved (where “__” corresponds to cases where all 64 genetic variants yield the same lifetime):

\n
\n
\n

\n

OK, so what about the overall lifetime distribution across all (single) perturbations for each of the genetic variants? The detailed distribution we get is different for each variant. But their general shape is always remarkably similar

\n
\n
\n

\n

though taking differences from the case of our original idealized organism reveals some structure:

\n
\n
\n

\n

As another indication of the effect of genetic diversity, we can plot the survival curve averaged over all perturbations, and compare the case for our original idealized organism with what happens if we average equally over all 64 genetic variants. The difference is small, but there is a longer tail for the average of the genetic variants than for our specific original idealized organism:

\n
\n
\n

\n

We’ve seen how our idealized genetic variation affects “disease”. But how does it affect “treatment”? For the “disease” above, we already saw that there’s a particular “treatment perturbation” that successfully returns our original idealized organism to its “natural lifespan”. So what happens if we apply this same treatment across all the genetic variants? In effect this is like doing a very idealized “clinical trial” of our potential treatment. And what we see is that the results are quite diverse—and indeed more diverse than from the disease on its own:

\n
\n
\n

\n

In essence what we’re seeing is that, yes, there are some genetic variants for which the treatment still works. But there are many for which there are (often fairly dramatic) side effects.

\n

Biological Evolution and Our Model Organism

\n

So where did the particular rule for the “model organism” we’ve been studying come from? Well, we evolved it—using a slight generalization of the idealized model for biological evolution that I recently introduced. The goal of our evolutionary process was to find a rule that generates a pattern that lives as long as possible, but not infinitely long—and that does so robustly even in the presence of perturbations. In essence we used lifetime (or, more accurately, “lifetime under perturbation”) as our “fitness function”, then progressively evolved our rule (or “genome”) by random mutations to try to maximize this fitness function.

\n

In more detail, we started from the null (“everything turns white”) rule, then successively made random changes to single cases in the rule (“point mutations”)—keeping the resulting rule whenever the pattern it generated had a lifetime (under perturbation) that wasn’t smaller (or infinite). And with this setup, here’s the particular (random) sequence of rules we got (showing for each rule the outcome for each of its 64 cases):

\n
\n
\n

\n

Many of these rules don’t “make progress” in the sense that they increase the lifetime under perturbation. But every so often there’s a “breakthrough”, and a rule with a longer lifetime under perturbation is reached:

\n
\n
\n

\n

And, as we see, the rule for the particular model organism we’ve been using is what’s reached at the end.

\n

In studying my recent idealized model for biological evolution, I considered fitness functions like lifetime that can directly be computed just by running the underlying rule from a certain initial condition. But here I’m generalizing that a bit, and considering as a fitness function not just lifetime, but “lifetime under perturbation”, computed by taking a particular rule, and finding the minimum lifetime of all patterns produced by it with certain random perturbations applied.

\n

So, for example, here the “lifetime under perturbation” would be the minimum of the lifetimes generated with no perturbation and with certain random perturbations—in this case 60:

\n
\n
\n

\n

This plot then illustrates how the (lifetime-under-perturbation) fitness (indicated by the blue line) behaves in the course of our adaptive evolution process, right around where the fitness-60 “breakthrough” above occurs:

\n
\n
\n

\n

What’s happening in this plot? At each adaptive step, we’re considering a new rule, obtained by a point mutation from the previous one. Running this rule we get a certain lifetime. If this lifetime is finite, we indicate it by a green dot. Then we apply a certain set of random perturbations—indicating the lifetimes we get by gray dots. (We could imagine using all sorts of schemes for picking the random perturbations; here what we’re doing is to perturb random points on about a tenth of the rows in the unperturbed pattern.)

\n

Then the minimum lifetime for any given rule we indicate by a red dot—and this is the fitness we assign to that rule. So now we can see the whole progression of our adaptive evolution process:

\n
\n
\n

\n
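The adaptive process just described can be sketched end to end. The toy version below, which is only an illustration under simplifying assumptions, uses two-color (k = 2, r = 1) rules, so the “genome” is an 8-case table rather than the 64-case k = 4 tables in the text; fitness is the minimum lifetime over the unperturbed run plus a few random single-cell perturbations, and a point mutation is kept whenever that fitness is finite and hasn’t decreased:

```python
import random
from itertools import product

# Adaptive evolution with a "lifetime under perturbation" fitness:
# mutate one case of a two-color rule table at a time, score the rule
# by the MINIMUM lifetime over the unperturbed run and a handful of
# random single-cell perturbations, and keep the mutation whenever the
# fitness is finite and not smaller than before.

random.seed(1)
WIDTH, MAX_STEPS = 41, 60
NBHDS = list(product([0, 1], repeat=3))

def run(rule, flip=None):
    """Lifetime from a single seed, optionally flipping one cell at a
    given (step, position); None means it outlived MAX_STEPS
    (treated as "infinite")."""
    row = [0] * WIDTH
    row[WIDTH // 2] = 1
    for t in range(MAX_STEPS):
        if not any(row):
            return t               # "already dead" before the flip
        if flip is not None and flip[0] == t:
            row = row[:]
            row[flip[1]] ^= 1
        row = [rule[(row[i - 1], row[i], row[(i + 1) % WIDTH])]
               for i in range(WIDTH)]
    return None

def fitness(rule):
    outcomes = [run(rule)]
    for _ in range(5):             # a few random perturbations
        outcomes.append(run(rule, flip=(random.randrange(10),
                                        random.randrange(WIDTH))))
    return None if None in outcomes else min(outcomes)

rule = {n: 0 for n in NBHDS}       # start from the null rule
best = fitness(rule)

for _ in range(300):               # point mutations
    trial = dict(rule)
    trial[random.choice(NBHDS)] ^= 1
    f = fitness(trial)
    if f is not None and f >= best:
        rule, best = trial, f      # keep the mutation

print(best)                        # lifetime under perturbation achieved
```

Because the perturbations are drawn at random, the fitness is stochastic, and (as in the plots above) the minimum over perturbed runs necessarily sits at or below the unperturbed lifetime, which is the mechanism behind the robustness-versus-lifetime tradeoff discussed next.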

One thing that’s notable is that the unperturbed lifetimes (green dots) are considerably larger than the final minimum lifetimes (red dots). And what this means is that our requirement of “robustness”, implemented by looking at lifetime under perturbation rather than just unperturbed lifetime, considerably reduces the lifetimes we can reach. In other words, if our idealized organism is going to be robust, it won’t tend to be able to have as long a lifetime as it could if it didn’t have to “worry about” random perturbations.

\n

And to illustrate this, here’s a typical example of a much longer lifetime obtained by adaptive evolution with the same kind of rule we’ve been using (k = 4, r = 1 cellular automaton), but now with no perturbations and with fitness being given purely by the unperturbed lifetime (exactly as in my recent work on biological evolution):

\n
\n
\n

\n

OK, so given that we’re evolving with a lifetime-under-perturbation fitness function, what are some alternatives to our particular model organism? Here are a few examples:

\n
\n
\n

\n

At an overall level, these seem to react to perturbations much like our original model organism:

\n
\n
\n

\n

One notable feature here, though, is that there seems to be a tendency for simpler overall behavior to be less disrupted by perturbations. In other words, our idealized “diseases” seem to have less dramatic effects on “simpler” idealized organisms. And we can see a reflection of this phenomenon if we plot the overall (single-perturbation) lifetime distributions for the four rules above:

\n
\n
\n

\n

But despite detailed differences, the main conclusion seems to be that there’s nothing special about the particular model organism we’ve used—and that if we repeated our whole analysis for different model organisms (i.e. “different idealized species”) the results we’d get would be very much the same.

\n

What It Means and Where to Go from Here

\n

So what does all this mean? At the outset, it wasn’t clear there’d be a way to usefully capture anything about the foundations of medicine in a formalized theoretical way. But in fact what we’ve found is that even the very simple computational model we’ve studied seems to successfully reflect all sorts of features of what we see in medicine. Many of the fundamental effects and phenomena are, it seems, not the result of details of biomedicine, but instead are at their core purely abstract and computational—and therefore accessible to formalized theory and metamodeling. This kind of methodology is very different from what’s been traditional in medicine—and isn’t likely to lead directly to specific practical medicine. But what it can do is to help us develop powerful new general intuition and ways of reasoning—and ultimately an understanding of the conceptual foundations of what’s going on.

\n

At the heart of much of what we’ve seen is the very fundamental—and ubiquitous—phenomenon of computational irreducibility. I’ve argued recently that computational irreducibility is central to what makes biological evolution work—and that it’s inevitably imprinted on the core “computational architecture” of biological organisms. And it’s this computational irreducibility that inexorably leads to much of the complexity we see so ubiquitously in medicine. Can we expect to find a simple narrative explanation for the consequences of some perturbation to an organism? In general, no—because of computational irreducibility. There are always pockets of computational reducibility, but in general we can have no expectation that, for example, we’ll be able to describe the effects of different perturbations by neatly classifying them into a certain set of distinct “diseases”.

\n

To a large extent the core mission of medicine is about “treating diseases”, or in our terms, about remediating or reversing the effects of perturbations. And once again, computational irreducibility implies there’s inevitably a certain fundamental difficulty in doing this. It’s a bit like with the Second Law of thermodynamics, where there’s enough computational irreducibility in microscopic molecular dynamics that to seriously reverse—or outpredict—this dynamics is something that’s at least far out of range for computationally bounded observers like us. And in our medical setting the analog of that is that “computationally bounded interventions” can only systematically lead to medical successes insofar as they tap into pockets of computational reducibility. And insofar as they are exposed to overall computational irreducibility they will inevitably seem to show a certain amount of apparent randomness in their outcomes.

\n

In traditional approaches to medicine one ultimately tends to “give in to the randomness” and go no further than to assign probabilities to things. But an important feature of what we’ve done here is that in our idealized computational models we can always explicitly see what’s happening inside. Often—largely as a consequence of computational irreducibility—it’s complicated. But the fact that we can see it gives us the opportunity to get much more clarity about the fundamental mechanisms involved. And if we end up summarizing what happens by giving probabilities and doing statistics it’s because this is something we’re choosing to do, not something we’re forced to do because of our lack of knowledge of the systems we’re studying.

\n

There’s much to do in our effort to explore the computational foundations of medicine. But already there are some implications that are beginning to emerge. Much of the workflow of medicine today is based on classifying things that can go wrong into discrete diseases. But what we’ve seen here (which is hardly surprising given practical experience with medicine) is that when one looks at the details, a huge diversity of things can happen—whose characteristics and outcomes can’t really be binned neatly into discrete “diseases”.

\n

And indeed when we try to figure out “treatments” the details matter. As a first approximation, we might base our treatments on coarse graining into discrete diseases. But—as the approach I’ve outlined here can potentially help us analyze—the more we can directly go from detailed measurements to detailed treatments (through computation, machine learning, etc.), the more promising it’s likely to be. Not that it’s easy. Because in a sense we’re trying to beat computational irreducibility—with computationally bounded measurements and interventions.

\n

In principle one can imagine a future in which our efforts at treatment have much more computational sophistication (and indeed the immune system presumably already provides an example in nature). We can imagine things like algorithmic drugs and artificial cells that are capable of amounts of computation that are a closer match for the irreducible computation of an organism. And indeed the kind of formalized theory that I’ve outlined here is likely what one needs to begin to get an idea of how such an approach might work. (In the thermodynamic analogy, what we need to do is a bit like reversing entropy increase by sending in large numbers of “smart molecules”.)

\n

(By the way, seeing how difficult it potentially is to reverse the effects of a perturbation provides all the more impetus to consider “starting from scratch”—as nature does in successive generations of organisms—and simply wholesale regenerating elements of organisms, rather than trying to “fix what’s there”. And, yes, in our models this is for example like starting to grow again from a new seed, and letting the resulting pattern knit itself into the existing one.)

\n

One of the important features of operating at the level of computational foundations is that we can expect conclusions we draw to be very general. And we might wonder whether perhaps the framework we’ve described here could be applied outside of medicine. And to some extent I suspect it can—potentially to areas like robustness of large-scale technological and social systems and specifically things like computer security and computer system failures. (And, yes, much as in medicine one can imagine for example “classifying diseases” for computer systems.) But things likely won’t be quite the same in cases like these—because the underlying systems have much more human-determined mechanisms, and less “blind” adaptive evolution.

\n

But when it comes to medicine, the very presence of computational irreducibility introduced by biological evolution is what potentially allows one to develop a robust framework in which one can draw conclusions purely on the basis of abstract computational phenomena. Here I’ve just begun to scratch the surface of what’s possible. But I think we’ve already seen enough that we can be confident that medicine is yet another field whose foundations can be seen as fundamentally rooted in the computational paradigm.

\n

Thanks & Notes

\n

Thanks to Wolfram Institute researcher Willem Nielsen for extensive help.

\n

I’ve never written anything substantial about medicine before, though I’ve had many interactions with the medical research and biomedical communities over the years—that have gradually extended my knowledge and intuition about medicine. (Thanks particularly to Beatrice Golomb, who over the course of more than forty years has helped me understand more about medical reasoning, often emphasizing “Beatrice’s Law” that “Everything in medicine is more complicated than you can possibly imagine, even taking account of Beatrice’s Law”…)

\n", + "category": "Computational Science", + "link": "https://writings.stephenwolfram.com/2025/02/towards-a-computational-formalization-for-foundations-of-medicine/", + "creator": "Stephen Wolfram", + "pubDate": "Mon, 03 Feb 2025 23:27:46 +0000", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "en", + "folder": "", + "feed": "wolfram", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "e2c36ae1f87fa5b745c733eb966837dd", + "highlights": [] + }, + { + "title": "Launching Version 14.2 of Wolfram Language & Mathematica: Big Data Meets Computation & AI", + "description": "\"\"The Drumbeat of Releases Continues… Notebook Assistant Chat inside Any Notebook Bring Us Your Gigabytes! Introducing Tabular Manipulating Data in Tabular Getting Data into Tabular Cleaning Data for Tabular The Structure of Tabular Tabular Everywhere Algebra with Symbolic Arrays Language Tune-Ups Brightening Our Colors; Spiffing Up for 2025 LLM Streamlining & Streaming Streamlining Parallel Computation: […]", + "content": "\"\"\n
\n\n
\n

The Drumbeat of Releases Continues…

\n

Just under six months ago (176 days ago, to be precise) we released Version 14.1. Today I’m pleased to announce that we’re releasing Version 14.2, delivering the latest from our R&D pipeline.

\n

This is an exciting time for our technology, both in terms of what we’re now able to implement, and in terms of how our technology is now being used in the world at large. A notable feature of these times is the increasing use of Wolfram Language not only by humans, but also by AIs. And it’s very nice to see that all the effort we’ve put into consistent language design, implementation and documentation over the years is now paying dividends in making Wolfram Language uniquely valuable as a tool for AIs—complementing their own intrinsic capabilities.

\n

But there’s another angle to AI as well. With our Wolfram Notebook Assistant launched last month we’re using AI technology (plus a lot more) to provide what amounts to a conversational interface to Wolfram Language. As I described when we released Wolfram Notebook Assistant, it’s something extremely useful for experts and beginners alike, but ultimately I think its most important consequence will be to accelerate the ability to go from any field X to “computational X”—making use of the whole tower of technology we’ve built around Wolfram Language.

\n

So, what’s new in 14.2? Under the hood there are changes to make Wolfram Notebook Assistant more efficient and more streamlined. But there are also lots of visible extensions and enhancements to the user-visible parts of the Wolfram Language. In total there are 80 completely new functions—along with 177 functions that have been substantially updated.

\n

There are continuations of long-running R&D stories, like additional functionality for video, and additional capabilities around symbolic arrays. Then there are completely new areas of built-in functionality, like game theory. But the largest new development in Version 14.2 is around handling tabular data, and particularly, big tabular data. It’s a whole new subsystem for Wolfram Language, with powerful consequences throughout the system. We’ve been working on it for quite a few years, and we’re excited to be able to release it for the first time in Version 14.2.


Talking of working on new functionality: starting more than seven years ago we pioneered the concept of open software design, livestreaming our software design meetings. And, for example, since the release of Version 14.1, we’ve done 43 software design livestreams, for a total of 46 hours (I’ve also done 73 hours of other livestreams in that time). Some of the functionality that’s now in Version 14.2 we started work on quite a few years ago. But we’ve been livestreaming long enough that pretty much anything that’s now in Version 14.2 we designed live and in public on a livestream at some time or another. It’s hard work doing software design (as you can tell if you watch the livestreams). But it’s always exciting to see those efforts come to fruition in the system we’ve been progressively building for so long. And so, today, it’s a pleasure to be able to release Version 14.2 and to let everyone use the things we’ve been working so hard to build.


Notebook Assistant Chat inside Any Notebook


Last month we released the Wolfram Notebook Assistant to “turn words into computation”—and help experts and novices alike make broader and deeper use of Wolfram Language technology. In Version 14.1 the primary way to use Notebook Assistant is through the separate “side chat” Notebook Assistant window. But in Version 14.2 “chat cells” have become a standard feature of any notebook available to anyone with a Notebook Assistant subscription.


Just type ' as the first character of any cell, and it’ll become a chat cell:


Chat cell


Now you can start chatting with the Notebook Assistant:


With the side chat you have a “separate channel” for communicating with the Notebook Assistant—that won’t, for example, be saved with your notebook. With chat cells, your chat becomes an integral part of the notebook.


We actually first introduced Chat Notebooks in the middle of 2023—just a few months after the arrival of ChatGPT. Chat Notebooks defined the interface, but at the time, the actual content of chat cells was purely from external LLMs. Now in Version 14.2, chat cells are not limited to separate Chat Notebooks, but are available in any notebook. And by default they make use of the full Notebook Assistant technology stack, which goes far beyond a raw LLM. In addition, once you have a Notebook Assistant + LLM Kit subscription, you can seamlessly use chat cells; no account with external LLM providers is needed.


The chat cell functionality in Version 14.2 inherits all the features of Chat Notebooks. For example, typing ~ in a new cell creates a chat break, that lets you start a “new conversation”. And when you use a chat cell, it’s able to see anything in your notebook up to the most recent chat break. (By the way, when you use Notebook Assistant through side chat it can also see what selection you’ve made in your “focus” notebook.)


By default, chat cells are “talking” to the Notebook Assistant. But if you want, you can also use them to talk to external LLMs, just like in our original Chat Notebook—and there’s a convenient menu to set that up. Of course, if you’re using an external LLM, you don’t have all the technology that’s now in the Notebook Assistant, and unless you’re doing LLM research, you’ll typically find it much more useful and valuable to use chat cells in their default configuration—talking to the Notebook Assistant.


Bring Us Your Gigabytes! Introducing Tabular


Lists, associations, datasets. These are very flexible ways to represent structured collections of data in the Wolfram Language. But now in Version 14.2 there’s another: Tabular. Tabular provides a very streamlined and efficient way to handle tables of data laid out in rows and columns. And when we say “efficient” we mean that it can routinely juggle gigabytes of data or more, both in core and out of core.


Let’s do an example. Let’s start off by importing some tabular data:


This is data on trees in New York City, 683,788 of them, each with 45 properties (sometimes missing). Tabular introduces a variety of new ideas. One of them is treating tabular columns much like variables. Here we’re using this to make a histogram of the values of the \"tree_dbh\" column in this Tabular:


You can think of a Tabular as being like an optimized form of a list of associations, where each row consists of an association whose keys are column names. Functions like Select then just work on Tabular:
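As a sketch of what such a call looks like (trees is assumed here to hold the imported Tabular from above, and the diameter threshold is made up for the example):

```wolfram
(* Each row is passed to the criterion as an association, so columns
   can be referenced by name with #\"colname\" *)
bigTrees = Select[trees, #\"tree_dbh\" > 30 &];
Length[bigTrees]
```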


Length gives the number of rows:


CountsBy treats the Tabular as a list of associations, extracting the value associated with the key \"spc_latin\" (“Latin species”) in each association, and counting how many times that value occurs (\"spc_latin\" here is short for #\"spc_latin\"&):
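Schematically (with trees again standing for the imported Tabular):

```wolfram
(* Count how many trees there are of each Latin species *)
CountsBy[trees, \"spc_latin\"]  (* \"spc_latin\" acts like #\"spc_latin\"& *)
```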


To get the names of the columns we can use the new function ColumnKeys:


Viewing Tabular as being like a list of associations we can extract parts—giving first a specification of rows, and then a specification of columns:


There are lots of new operations that we’ve been able to introduce now that we have Tabular. An example is AggregateRows, which constructs a new Tabular from a given Tabular by aggregating groups of rows, in this case ones with the same value of \"spc_latin\", and then applying a function to those rows, in this case finding the mean value of \"tree_dbh\":


An operation like ReverseSortBy then “just works” on this table, here reverse sorting by the value of \"meandbh\":


Here we’re making an ordinary matrix out of a small slice of data from our Tabular:


And now we can plot the result, giving the positions of Virginia pine trees in New York City:


When should you use a Tabular, rather than, say, a Dataset? Tabular is specifically set up for data that is arranged in rows and columns—and it supports many powerful operations that make sense for data in this “rectangular” form. Dataset is more general; it can have an arbitrary hierarchy of data dimensions, and so can’t in general support all the “rectangular” data operations of Tabular. In addition, by being specialized for “rectangular” data, Tabular can also be much more efficient, and indeed we’re making use of the latest type-specific methods for large-scale data handling.


If you use TabularStructure you can see some of what lets Tabular be so efficient. Every column is treated as data of a specific type (and, yes, the types are consistent with the ones in the Wolfram Language compiler). And there’s streamlined treatment of missing data (with several new functions added specifically to handle this):


What we’ve seen so far is Tabular operating with “in-core” data. But you can quite transparently also use Tabular on out-of-core data, for example data stored in a relational database.


Here’s an example of what this looks like:


It’s a tabular that points to a table in a relational database. It doesn’t by default explicitly display the data in the Tabular (and in fact it doesn’t even get it into memory—because it might be huge and might be changing quickly as well). But you can still specify operations just like on any other Tabular. This finds out what columns are there:


And this specifies an operation, giving the result as a symbolic out-of-core Tabular object:


You can “resolve” this, and get an explicit in-memory Tabular using ToMemory:


Manipulating Data in Tabular


Let’s say you’ve got a Tabular—like this one based on penguins:


There are lots of operations you can do that manipulate the data in this Tabular in a structured way—giving you back another Tabular. For example, you could just take the last 2 rows of the Tabular:


Or you could sample 3 random rows:


Other operations depend on the actual content of the Tabular. And because you can treat each row like an association, you can set up functions that effectively refer to elements by their column names:


Note that we can always use #[name] to refer to elements in a column. If name is an alphanumeric string then we can also use the shorthand #name. And for other strings, we can use #\"name\". Some functions let you just use \"name\" to indicate the function #[\"name\"]:
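For example, all three of these forms pick out the same column in a criterion (the column name \"body_mass_g\" is assumed from the penguin data purely for illustration):

```wolfram
Select[penguins, #[\"body_mass_g\"] > 4000 &]  (* works for any column name *)
Select[penguins, #body_mass_g > 4000 &]        (* alphanumeric names only *)
Select[penguins, #\"body_mass_g\" > 4000 &]    (* general string names *)
```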


So far we’ve talked only about arranging or selecting rows in a Tabular. What about columns? Here’s how we can construct a tabular that has just two of the columns from our original Tabular:


What if we don’t just want existing columns, but instead want new columns that are functions of these? ConstructColumns lets us define new columns, giving their names and the functions to be used to compute values in them:


(Note the trick of writing out Function to avoid having to put parentheses, as in \"species\" → (StringTake[#species,1]&).)


ConstructColumns lets you take an existing Tabular and construct a new one. TransformColumns lets you transform columns in an existing Tabular, here replacing species names by their first letters:


TransformColumns also lets you add new columns, specifying the content of the columns just like in ConstructColumns. But where does TransformColumns put your new columns? By default, they go at the end, after all existing columns. But if you specifically list an existing column, that’ll be used as a marker to determine where to put the new column (\"name\" → Nothing removes a column):


Everything we’ve seen so far operates separately on each row of a Tabular. But what if we want to “gulp in” a whole column to use in our computation—say, for example, computing the mean of a whole column, then subtracting it from each value? ColumnwiseValue lets you do this, by supplying to the function (here Mean) a list of all the values in whatever column or columns you specify:


ColumnwiseValue effectively lets you compute a scalar value by applying a function to a whole column. There’s also ColumnwiseThread, which lets you compute a list of values, that will in effect be “threaded” into a column. Here we’re creating a column from a list of accumulated values:


By the way, as we’ll discuss below, if you’ve externally generated a list of values (of the right length) that you want to use as a column, you can do that directly by using InsertColumns.


There’s another concept that’s very useful in practice in working with tabular data, and that’s grouping. In our penguin data, we’ve got an individual row for each penguin of each species. But what if we want instead to aggregate all the penguins of a given species, for example computing their average body mass? Well, we can do this with AggregateRows. AggregateRows works like ConstructColumns in the sense that you specify columns and their contents. But unlike ConstructColumns it creates new “aggregated” rows:


What is that first column here? The gray background of its entries indicates that it’s what we call a “key column”: a column whose entries (perhaps together with other key columns) can be used to reference rows. And later, we’ll see how you can use RowKey to indicate a row by giving a value from a key column:


But let’s go on with our aggregation efforts. Let’s say that we want to group not just by species, but also by island. Here’s how we can do that with AggregateRows:


In a sense what we have here is a table whose rows are specified by pairs of values (here “species” and “island”). But it’s often convenient to “pivot” things so that these values are used respectively for rows and for columns. And you can do that with PivotTable:


Note the —’s, which indicate missing values; apparently there are no Gentoo penguins on Dream island, etc.


PivotTable normally gives exactly the same data as AggregateRows, but in a rearranged form. One additional feature of PivotTable is the option IncludeGroupAggregates which includes All entries that aggregate across each type of group:


If you have multiple functions that you’re computing, AggregateRows will just give them as separate columns:


PivotTable can also deal with multiple functions—by creating columns with “extended keys”:


And now you can use RowKey and ExtendedKey to refer to elements of the resulting Tabular:


Getting Data into Tabular


We’ve seen some of the things you can do when you have data as a Tabular. But how does one get data into a Tabular? There are several ways. The first is just to convert from structures like lists and associations. The second is to import from a file, say a CSV or XLSX (or, for larger amounts of data, Parquet)—or from an external data store (S3, Dropbox, etc.). And the third is to connect to a database. You can also get data for Tabular directly from the Wolfram Knowledgebase or from the Wolfram Data Repository.


Here’s how you can convert a list of lists into a Tabular:


And here’s how you can convert back:


It works with sparse arrays too, here instantly creating a million-row Tabular


that takes 80 MB to store:


Here’s what happens with a list of associations:


You can get the same Tabular by entering its data and its column names separately:
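Schematically, assuming the constructor form in which the rows and the column names are given as separate arguments:

```wolfram
(* assumed constructor form: rows first, column names second *)
tab = Tabular[{{1, \"apple\"}, {2, \"pear\"}}, {\"n\", \"fruit\"}];
Normal[tab]
(* a list of associations, one per row, keyed by column name *)
```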


By the way, you can convert a Tabular to a Dataset


and in this simple case you can convert it back to a Tabular too:


In general, though, there are all sorts of options for how to convert lists, datasets, etc. to Tabular objects—and ToTabular is set up to let you control these. For example, you can use ToTabular to create a Tabular from columns rather than rows:


How about external data? In Version 14.2 Import now supports a \"Tabular\" element for tabular data formats. So, for example, given a CSV file


CSV file


Import can immediately import it as a Tabular:


This works very efficiently even for huge CSV files with millions of entries. It also does well at automatically identifying column names and headers. The same kind of thing works with more structured files, like ones from spreadsheets and statistical data formats. And it also works with modern columnar storage formats like Parquet, ORC and Arrow.


Import transparently handles both ordinary files, and URLs (and URIs), requesting authentication if needed. In Version 14.2 we’re adding the new concept of DataConnectionObject, which provides a symbolic representation of remote data, essentially encapsulating all the details of how to get the data. So, for example, here’s a DataConnectionObject for an S3 bucket, whose contents we can immediately import:


(In Version 14.2 we’re supporting Amazon S3, Azure Blob Storage, Dropbox, IPFS—with many more to come. And we’re also planning support for data warehouse connections, APIs, etc.)


But what about data that’s too big—or too fast-changing—to make sense to explicitly import? An important feature of Tabular (mentioned above) is that it can transparently handle external data, for example in relational databases.


Here’s a reference to a large external database:


RelationalDatabase


This defines a Tabular that points to a table in the external database:


tab = Tabular


We can ask for the dimensions of the Tabular—and we see that it has 158 million rows:


Dimensions


The table we’re looking at happens to be all the line-oriented data in OpenStreetMap. Here are the first 3 rows and 10 columns:


ToMemory


Most operations on the Tabular will now actually get done in the external database. Here we’re asking to select rows whose “name” field contains \"Wolfram\":


Select


The actual computation is only done when we use ToMemory, and in this case (because there’s a lot of data in the database) it takes a little while. But soon we get the result, as a Tabular:


ToMemory


And we learn that there are 58 Wolfram-named items in the database:


Length


Another source of data for Tabular is the built-in Wolfram Knowledgebase. In Version 14.2 EntityValue supports direct output in Tabular form:


The Wolfram Knowledgebase provides lots of good examples of data for Tabular. And the same is true of the Wolfram Data Repository—where you can typically just apply Tabular to get data in Tabular form:


Cleaning Data for Tabular


In many ways it’s the bane of data science. Yes, data is in digital form. But it’s not clean; it’s not computable. The Wolfram Language has long been a uniquely powerful tool for flexibly cleaning data (and, for example, for advancing through the ten levels of making data computable that I defined some years ago).


But now, in Version 14.2, with Tabular, we have a whole new collection of streamlined capabilities for cleaning data. Let’s start by importing some data “from the wild” (and, actually, this example is cleaner than many):


(By the way, if there was really crazy stuff in the file, we might have wanted to use the option MissingValuePattern to specify a pattern that would just immediately replace the crazy stuff with Missing[].)


OK, but let’s start by surveying what came in here from our file, using TabularStructure:


We see that Import successfully managed to identify the basic type of data in most of the columns—though for example it can’t tell if numbers are just numbers or are representing quantities with units, etc. And it also identifies that some number of entries in some columns are “missing”.


As a first step in data cleaning, let’s get rid of what seems like an irrelevant \"id\" column:


Next, we see that the elements in the first column are being identified as strings—but they’re really dates, and they should be combined with the times in the second column. We can do this with TransformColumns, removing what’s now an “extra column” by replacing it with Nothing:


Looking at the various numerical columns, we see that they’re really quantities that should have units. But first, for convenience, let’s rename the last two columns:


Now let’s turn the numerical columns into columns of quantities with units, and, while we’re at it, also convert from °C to °F:


Here’s how we can now plot the temperature as a function of time:


There’s a lot of wiggling there. And looking at the data we see that we’re getting temperature values from several different weather stations. This selects data from a single station:


What’s the break in the curve? If we just scroll to that part of the tabular we’ll see that it’s because of missing data:


So what can we do about this? Well, there’s a powerful function TransformMissing that provides many options. Here we’re asking it to interpolate to fill in missing temperature values:


And now there are no gaps, but, slightly mysteriously, the whole plot extends further:


The reason is that it’s interpolating even in cases where basically nothing was measured. We can remove those rows using Discard:


And now we won’t have that “overhang” at the end:


Sometimes there’ll explicitly be data that’s missing; sometimes (more insidiously) the data will just be wrong. Let’s look at the histogram of pressure values for our data:


Oops. What are those small values? Presumably they’re wrong. (Perhaps they were transcription errors?) We can remove such “anomalous” values by using TransformAnomalies. Here we’re telling it to just completely trim out any row where the pressure was “anomalous”:


We can also get TransformAnomalies to try to “fix” the data. Here we’re just replacing any anomalous pressure by the previous pressure listed in the tabular:


You can also tell TransformAnomalies to “flag” any anomalous value and make it “missing”. But if we’ve got missing values, what then happens if we try to do computations on them? That’s where MissingFallback comes in. It’s fundamentally a very simple function—that just returns its first non-missing argument:
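Since MissingFallback just returns its first non-missing argument, its behavior can be sketched as:

```wolfram
MissingFallback[Missing[\"NotAvailable\"], 0]  (* falls back to 0 *)
MissingFallback[3.2, 0]                        (* first argument is fine: 3.2 *)
```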


But even though it’s simple, it’s important in making it easy to handle missing values. So, for example, this computes a “northspeed”, falling back to 0 if data needed for the computation is missing:


The Structure of Tabular


We’ve said that a Tabular is “like” a list of associations. And, indeed, if you apply Normal to it, that’s what you’ll get:


But internally Tabular is stored in a much more compact and efficient way. And it’s useful to know something about this, so you can manipulate Tabular objects without having to “take them apart” into things like lists and associations. Here’s our basic sample Tabular:


What happens if we extract a row? Well, we get a TabularRow object:


If we apply Normal, we get an association:


Here’s what happens if we instead extract a column:


Now Normal gives a list:


We can create a TabularColumn from a list:


Now we can use InsertColumns to insert a symbolic column like this into an existing Tabular (including the \"b\" tells InsertColumns to insert the new column after the “b” column):


But what actually is a Tabular inside? Let’s look at the example:


TabularStructure gives us a summary of the internal structure here:


The first thing to notice is that everything is stated in terms of columns, reflecting the fact that Tabular is a fundamentally column-oriented construct. And part of what makes Tabular so efficient is then that within a column everything is uniform, in the sense that all the values are the same type of data. In addition, for things like quantities and dates, we factor the data so that what’s actually stored internally in the column is just a list of numbers, with a single copy of “metadata information” on how to interpret them.


And, yes, all this has a big effect. Like here’s the size in bytes of our New York trees Tabular from above:


But if we turn it into a list of associations using Normal, the result is about 14x larger:


OK, but what are those “column types” in the tabular structure? ColumnTypes gives a list of them:


These are low-level types of the kind used in the Wolfram Language compiler. And part of what knowing these does is that it immediately tells us what operations we can do on a particular column. And that’s useful both in low-level processing, and in things like knowing what kind of visualization might be possible.


When Import imports data from something like a CSV file, it tries to infer what type each column is. But sometimes (as we mentioned above) you’ll want to “cast” a column to a different type, specifying the “destination type” using Wolfram Language type description. So, for example, this casts column “b” to a 32-bit real number, and column “c” to units of meters:


By the way, when a Tabular is displayed in a notebook, the column headers indicate the types of data in the corresponding columns. So in this case, there’s a small type icon in the first column’s header to indicate that it contains strings. Numbers and dates basically just “show what they are”. Quantities have their units indicated. And general symbolic expressions (like column “f” here) get a generic expression icon. (If you hover over a column header, it gives you more detail about the types.)


The next thing to discuss is missing data. Tabular always treats columns as being of a uniform type, but keeps an overall map of where values are missing. If you extract the column you’ll see a symbolic Missing:


But if you operate on the tabular column directly it’ll just behave as if the missing data is, well, missing:


By the way, if you’re bringing in data “from the wild”, Import will attempt to automatically infer the right type for each column. It knows how to deal with common anomalies in the input data, like NaN or null in a column of numbers. But if there are other weird things—like, say, notfound in the middle of a column of numbers—you can tell Import to turn such things into ordinary missing data by giving them as settings for the option MissingValuePattern.


There are a couple more subtleties to discuss in connection with the structure of Tabular objects. The first is the notion of extended keys. Let’s say we have the following Tabular:


We can “pivot this to columns” so that the values x and y become column headers, but “under” the overall column header “value”:


But what is the structure of this Tabular? We can use ColumnKeys to find out:


You can now use these extended keys as indices for the Tabular:


In this particular case, because the “subkeys” \"x\" and \"y\" are unique, we can just use those, without including the other part of the extended key:


Our final subtlety (for now) is somewhat related. It concerns key columns. Normally the way we specify a row in a Tabular object is just by giving its position. But if the values of a particular column happen to be unique, then we can use those instead to specify a row. Consider this Tabular:


The fruit column has the feature that each entry appears only once—so we can create a Tabular that uses this column as a key column:


Notice that the numbers for rows have now disappeared, and the key column is indicated with a gray background. In this Tabular, you can then reference a particular row using for example RowKey:


Equivalently, you can also use an association with the column name:


What if the values in a single column are not sufficient to uniquely specify a row, but several columns together are? (In a real-world example, say one column has first names, and another has last names, and another has dates of birth.) Well, then you can designate all those columns as key columns:


And once you’ve done that, you can reference a row by giving the values in all the key columns:


Tabular Everywhere


Tabular provides an important new way to represent structured data in the Wolfram Language. It’s powerful in its own right, but what makes it even more powerful is how it integrates with all the other capabilities in the Wolfram Language. Many functions just immediately work with Tabular. But in Version 14.2 hundreds have been enhanced to make use of the special features of Tabular.


Most often, it’s to be able to operate directly on columns in a Tabular. So, for example, given the Tabular


we can immediately make a visualization based on two of the columns:


If one of the columns has categorical data, we’ll recognize that, and plot it accordingly:


Another area where Tabular can immediately be used is machine learning. So, for example, this creates a classifier function that will attempt to determine the species of a penguin from other data about it:


Now we can use this classifier function to predict species from other data about a penguin:


We can also take the whole Tabular and make a feature space plot, labeling with species:


Or we could “learn the distribution of possible penguins”


and randomly generate 3 “fictitious penguins” from this distribution:


Algebra with Symbolic Arrays


One of the major innovations of Version 14.1 was the introduction of symbolic arrays—and the ability to create expressions involving vector, matrix and array variables, and to take derivatives of them. In Version 14.2 we’re taking the idea of computing with symbolic arrays a step further—for the first time systematically automating what has in the past been the manual process of doing algebra with symbolic arrays, and simplifying expressions involving symbolic arrays.


Let’s start by talking about ArrayExpand. Our longstanding function Expand just deals with expanding ordinary multiplication, effectively of scalars—so in this case it does nothing:


But in Version 14.2 we also have ArrayExpand which will do the expansion:
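Schematically, with a and b symbolic (the particular product here is an assumption for illustration):

```wolfram
Expand[(a + b).(a + b)]       (* unchanged: Expand doesn't look inside Dot *)
ArrayExpand[(a + b).(a + b)]  (* expands to a.a + a.b + b.a + b.b,
                                 keeping the non-commutative terms separate *)
```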


ArrayExpand deals with many generalizations of multiplication that aren’t commutative:


In an example like this, we really don’t need to know anything about a and b. But sometimes we can’t do the expansion without, for example, knowing their dimensions. One way to specify those dimensions is as a condition in ArrayExpand:


An alternative is to use an explicit symbolic array variable:


In addition to expanding generalized products using ArrayExpand, Version 14.2 also supports general simplification of symbolic array expressions:


The function ArraySimplify will specifically do simplification on symbolic arrays, while leaving other parts of expressions unchanged. Version 14.2 supports many kinds of array simplifications:


We could do these simplifications without knowing anything about the dimensions of a and b. But sometimes we can’t go as far without knowing these. For example, if we don’t know the dimensions we get:


But with the dimensions we can explicitly simplify this to an n×n identity matrix:


ArraySimplify can also take account of the symmetries of arrays. For example, let’s set up a symbolic symmetric matrix:


And now ArraySimplify can immediately resolve this:


The ability to do algebraic operations on complete arrays in symbolic form is very powerful. But sometimes it’s also important to look at individual components of arrays. And in Version 14.2 we’ve added ComponentExpand to let you get components of arrays in symbolic form.


So, for example this takes a 2-component vector and writes it out as an explicit list with two symbolic components:


Underneath, those components are represented using Indexed:


Here’s the determinant of a 3×3 matrix, written out in terms of symbolic components:


And here’s a matrix power:


Given two symbolic 3D vectors we can also, for example, form the cross product


and we can then go ahead and dot it into an inverse matrix:


Language Tune-Ups


As a daily user of the Wolfram Language I’m very pleased with how smoothly I find I can translate computational ideas into code. But the more we’ve made it easy to do, the more we can see new places where we can polish the language further. And in Version 14.2—like every version before it—we’ve added a number of “language tune-ups”.


A simple one—whose utility becomes particularly clear with Tabular—is Discard. You can think of it as a complement to Select: it discards elements according to the criterion you specify:
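So, schematically:

```wolfram
Select[{1, 2, 3, 4, 5}, EvenQ]   (* keeps the elements satisfying the criterion: {2, 4} *)
Discard[{1, 2, 3, 4, 5}, EvenQ]  (* discards them instead: {1, 3, 5} *)
```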


And along with adding Discard, we’ve also enhanced Select. Normally, Select just gives a list of the elements it selects. But in Version 14.2 you can specify other results. Here we’re asking for the “index” (i.e. position) of the elements that NumberQ is selecting:


Something that can be helpful in dealing with very large amounts of data is getting a bit vector data structure from Select (and Discard), that provides a bit mask of which elements are selected or not:


By the way, here’s how you can ask for multiple results from Select and Discard:


In talking about Tabular we already mentioned MissingFallback. Another function related to code robustification and error handling is the new function Failsafe. Let’s say you’ve got a list which contains some “failed” elements. If you map a function f over that list, it’ll apply itself to the failure elements just as to everything else:

\n
\n
\n

\n

But quite possibly f wasn’t set up to deal with these kinds of failure inputs. And that’s where Failsafe comes in. Because Failsafe[f][x] is defined to give f[x] if x is not a failure, and to just return the failure if it is. So now we can map f across our list with impunity, knowing it’ll never be fed failure input:

\n
\n
\n

\n
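The Failsafe idea—wrap a function so failure values pass through untouched instead of being fed to it—translates directly. Here's a minimal Python sketch (the `FAILED` sentinel is a stand-in for a failure object, not a Wolfram Language construct):

```python
# Sketch (Python analog): failsafe(f) passes failure values through
# unchanged, and applies f to everything else.
FAILED = object()  # stand-in for a failure object

def failsafe(f):
    def wrapped(x):
        return x if x is FAILED else f(x)
    return wrapped

values = [1, 2, FAILED, 4]
safe_square = failsafe(lambda x: x * x)
results = [safe_square(x) for x in values]
print(results[0], results[1], results[3])  # 1 4 16
print(results[2] is FAILED)                # True: failure passed through
```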

Talking of tricky error cases, another new function in Version 14.2 is HoldCompleteForm. HoldForm lets you display an expression without doing ordinary evaluation of the expression. But—like Hold—it still allows certain transformations to get made. HoldCompleteForm—like HoldComplete—prevents all these transformations. So while HoldForm gets a bit confused here when the sequence “resolves”

\n
\n
\n

\n

HoldCompleteForm just completely holds and displays the sequence:

\n
\n
\n

\n

Another piece of polish added in Version 14.2 concerns Counts. I often find myself wanting to count elements in a list, including getting 0 when a certain element is missing. By default, Counts just counts elements that are present:

\n
\n
\n

\n

But in Version 14.2 we’ve added a second argument that lets you give a complete list of all the elements you want to count—even if they happen to be absent from the list:

\n
\n
\n

\n
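The same "count everything in a given key list, including absentees" behavior is easy to sketch in Python with a `Counter`—again an analog of the idea, not the Wolfram Language implementation:

```python
from collections import Counter

# Sketch (Python analog): count elements, but report a (possibly zero)
# count for every key in a caller-supplied complete list of keys.
def counts(items, all_keys=None):
    c = Counter(items)
    if all_keys is None:
        return dict(c)
    return {k: c.get(k, 0) for k in all_keys}

print(counts(["a", "b", "a"]))                   # {'a': 2, 'b': 1}
print(counts(["a", "b", "a"], ["a", "b", "c"]))  # {'a': 2, 'b': 1, 'c': 0}
```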

As a final example of language tune-up in Version 14.2 I’ll mention AssociationComap. In Version 14.0 we introduced Comap as a “co-” (as in “co-functor”, etc.) analog of Map:

\n
\n
\n

\n

In Version 14.2 we’re introducing AssociationComap—the “co-” version of AssociationMap:

\n
\n
\n

\n

Think of it as a nice way to make labeled tables of things, as in:

\n
\n
\n

\n
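The "co-" idea is simple to state: Map applies one function to many arguments, while Comap applies many functions to one argument—and AssociationComap labels each result by the function that produced it. A Python sketch of that semantics (hypothetical helper names, not the Wolfram Language API):

```python
import math

# Sketch (Python analog): comap applies a list of functions to one value;
# the association version returns a labeled table of results.
def comap(funcs, x):
    return [f(x) for f in funcs]

def association_comap(funcs, x):
    return {name: f(x) for name, f in funcs.items()}

fs = {"sqrt": math.sqrt, "log": math.log, "cube": lambda v: v ** 3}
print(comap(list(fs.values()), 2.0))  # [1.414..., 0.693..., 8.0]
print(association_comap(fs, 2.0))     # labeled table of the same results
```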

Brightening Our Colors; Spiffing Up for 2025

\n

In 2014—for Version 10.0—we did a major overhaul of the default colors for all our graphics and visualization functions, coming up with what we felt was a good solution. (And as we’ve just noticed, somewhat bizarrely, it turned out that in the years that followed, many of the graphics and visualization libraries out there seemed to copy what we did!) Well, a decade has now passed, visual expectations (and display technologies) have changed, and we decided it was time to spiff up our colors for 2025.

\n

Here’s what a typical plot looked like in Versions 10.0 through 14.1:

\n
\n
\n

\n

And here’s the same plot in Version 14.2:

\n
\n
\n

\n

By design, it’s still completely recognizable, but it’s got a little extra zing to it.

\n

With more curves, there are more colors. Here’s the old version:

\n
\n
\n

\n

And here’s the new version:

\n
\n
\n

\n

Histograms are brighter too. The old:

\n
\n
\n

\n

And the new:

\n
\n
\n

\n

Here’s the comparison between old (“2014”) and new (“2025”) colors:

\n
\n
\n

\n

It’s subtle, but it makes a difference. I have to say that increasingly over the past few years, I’ve felt I had to tweak the colors in almost every Wolfram Language image I’ve published. But I’m excited to say that with the new colors that urge has gone away—and I can just use our default colors again!

\n

LLM Streamlining & Streaming

\n

We first introduced programmatic access to LLMs in Wolfram Language in the middle of 2023, with functions like LLMFunction and LLMSynthesize. At that time, these functions needed access to external LLM services. But with the release last month of LLM Kit (along with Wolfram Notebook Assistant) we’ve made these functions seamlessly available for everyone with a Notebook Assistant + LLM Kit subscription. Once you have your subscription, you can use programmatic LLM functions anywhere and everywhere in Version 14.2 without any further setup.

\n

There are also two new functions: LLMSynthesizeSubmit and ChatSubmit. Both are concerned with letting you get incremental results from LLMs (and, yes, that’s important, at least for now, because LLMs can be quite slow). Like CloudSubmit and URLSubmit, LLMSynthesizeSubmit and ChatSubmit are asynchronous functions: you call them to start something that will call an appropriate handler function whenever a certain specified event occurs.

\n

Both LLMSynthesizeSubmit and ChatSubmit support a whole variety of events. An example is \"ContentChunkReceived\": an event that occurs when there’s a chunk of content received from the LLM.

\n

Here’s how one can use that:

\n
\n
\n

\n

The LLMSynthesizeSubmit returns a TaskObject, but then starts to synthesize text in response to the prompt you’ve given, calling the handler function you specified every time a chunk of text comes in. After a few moments, the LLM will have finished its process of synthesizing text, and if you ask for the value of c you’ll see each of the chunks it produced:

\n
\n
\n

\n
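The submit-and-handle pattern described above—start an asynchronous task, then have a handler fire on each "ContentChunkReceived"-style event—can be sketched in Python. This is a simulation of the pattern only (the "LLM" is a canned list of chunks, and the task object is just a thread), not the Wolfram Language API:

```python
import threading
import time

# Sketch (simulation): a "submit"-style asynchronous task that delivers
# content in chunks, invoking a handler for each chunk as it arrives.
def synthesize_submit(chunks, on_chunk):
    def run():
        for chunk in chunks:
            time.sleep(0.01)   # pretend the model is generating text
            on_chunk(chunk)    # fire the event handler with each chunk
    t = threading.Thread(target=run)
    t.start()
    return t  # stand-in for a task object

received = []
task = synthesize_submit(["The ", "cat ", "sat."], received.append)
task.join()  # in a notebook you'd keep working; here we just wait
print("".join(received))  # The cat sat.
```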

Let’s try this again, but now setting up a dynamic display for a string s and then running LLMSynthesizeSubmit to accumulate the synthesized text into this string:

\n
\n
\n

\n

ChatSubmit is the analog of ChatEvaluate, but asynchronous—and you can use it to create a full chat experience, in which content is streaming into your notebook as soon as the LLM (or tools called by the LLM) generate it.

\n

Streamlining Parallel Computation: Launch All the Machines!

\n

For nearly 20 years we’ve had a streamlined capability to do parallel computation in Wolfram Language, using functions like ParallelMap, ParallelTable and Parallelize. The parallel computation can happen on multiple cores on a single machine, or across many machines on a network. (And, for example, in my own current setup I have 7 machines right now with a total of 204 cores.)

\n

In the past few years, partly responding to the increasing number of cores typically available on individual machines, we’ve been progressively streamlining the way that parallel computation is provisioned. And in Version 14.2 we’ve, yes, parallelized the provisioning of parallel computation. Which means, for example, that my 7 machines all start their parallel kernels in parallel—so that the whole process is now finished in a matter of seconds, rather than potentially taking minutes, as it did before:

\n
\n
\n

\n

\n

Another new feature for parallel computation in Version 14.2 is the ability to automatically parallelize across multiple variables in ParallelTable. ParallelTable has always had a variety of algorithms for optimizing the way it splits up computations for different kernels. Now that’s been extended so that it can deal with multiple variables:

\n
\n
\n

\n
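Splitting a multi-variable table across workers amounts to flattening the iteration space into one pool of tasks and then reshaping the results. Here's a rough Python sketch of that idea (a thread-pool analog with a hypothetical `parallel_table` helper, not how Wolfram Language kernels actually divide the work):

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

# Sketch (Python analog): a ParallelTable-like helper that flattens the
# iteration over several variables into one pool of tasks, then reshapes.
def parallel_table(f, *ranges, workers=4):
    points = list(product(*ranges))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        flat = list(pool.map(lambda p: f(*p), points))
    # Reshape flat results back into nested lists, one level per variable.
    for r in reversed(ranges[1:]):
        n = len(r)
        flat = [flat[i:i + n] for i in range(0, len(flat), n)]
    return flat

print(parallel_table(lambda i, j: i * j, [1, 2, 3], [1, 2]))
# [[1, 2], [2, 4], [3, 6]]
```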

\n

As someone who very regularly does large-scale computations with the Wolfram Language, it’s hard to overstate how important its parallel computation capabilities have been to me. Usually I’ll first figure out a computation with Map, Table, etc. Then when I’m ready to do the full version I’ll swap in ParallelMap, ParallelTable, etc. And it’s remarkable how much difference a 200x increase in speed makes (assuming my computation doesn’t have too much communication overhead).

\n

(By the way, talking of communication overhead, two new functions in Version 14.2 are ParallelSelect and ParallelCases, which allow you to select and find cases in lists in parallel, saving communication overhead by sending only final results back to the master kernel. This functionality has actually been available for a while through Parallelize[ Select[ ] ] etc., but it’s streamlined in Version 14.2.)

\n

Follow that ____! Tracking in Video

\n

Let’s say we’ve got a video, for example of people walking through a train station. We’ve had the capability for some time to take a single frame of such a video, and find the people in it. But in Version 14.2 we’ve got something new: the capability to track objects that move around between frames of the video.

\n

Let’s start with a video:

\n
\n

\n
\n
\n

\n

We could take an individual frame, and find image bounding boxes. But as of Version 14.2 we can just apply ImageBoundingBoxes to the whole video at once:

\n
\n
\n
\n

\n

Then we can apply the data on bounding boxes to highlight people in the video—using the new HighlightVideo function:

\n
\n

\n
\n\n
\n

\n

But this just separately indicates where people are in each frame; it doesn’t connect them from one frame to another. In Version 14.2 we’ve added VideoObjectTracking to follow objects between frames:

\n
\n
\n

\n

Now if we use HighlightVideo, different objects will be annotated with different colors:

\n
\n

\n
\n\n
\n

\n

This picks out all the unique objects identified in the course of the video, and counts them:

\n
\n
\n

\n

“Where’s the dog?”, you might ask. It’s certainly not there for long:

\n
\n
\n

\n

And if we find the first frame where it is supposed to appear, it does seem as if what’s presumably a person on the lower right has been mistaken for a dog:

\n
\n
\n

\n

And, yup, that’s what it thought was a dog:

\n
\n
\n

\n

Game Theory

\n

“What about game theory?”, people have long asked. And, yes, there has been lots of game theory done with the Wolfram Language, and lots of packages written for particular aspects of it. But in Version 14.2 we’re finally introducing built-in system functions for doing game theory (both matrix games and tree games).

\n

Here’s how we specify a (zero-sum) 2-player matrix game:

\n
\n
\n

\n

This defines payoffs when each player takes each action. We can represent this by a dataset:

\n
\n
\n

\n

An alternative is to “plot the game” using MatrixGamePlot:

\n
\n
\n

\n

OK, so how can we “solve” this game? In other words, what action should each player take, with what probability, to maximize their average payoff over many instances of the game? (It’s assumed that in each instance the players simultaneously and independently choose their actions.) A “solution” that maximizes expected payoffs for all players is called a Nash equilibrium. (As a small footnote to history, John Nash was a long-time user of Mathematica and what’s now the Wolfram Language—though many years after he came up with the concept of Nash equilibrium.) Well, now in Version 14.2, FindMatrixGameStrategies computes optimal strategies (AKA Nash equilibria) for matrix games:

\n
\n
\n

\n

This result means that for this game each player should mix their two actions with the probabilities given in the solution, rather than always playing one action. But what are their expected payoffs? MatrixGamePayoff computes that:

\n
\n
\n

\n
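For the special case of a 2×2 zero-sum game with no saddle point, the mixed-strategy Nash equilibrium even has a closed form, which gives a feel for what FindMatrixGameStrategies is computing (this sketch is the textbook formula in Python, not the Wolfram Language implementation, which handles general matrix games via polynomial equation solving):

```python
# Sketch: closed-form mixed-strategy Nash equilibrium of a 2x2 zero-sum
# game with payoff matrix [[a, b], [c, d]] for the row player, assuming
# no saddle point (so the denominator below is nonzero).
def solve_2x2_zero_sum(a, b, c, d):
    denom = a - b - c + d
    p = (d - c) / denom              # probability row player plays row 1
    q = (d - b) / denom              # probability column player plays column 1
    value = (a * d - b * c) / denom  # expected payoff to the row player
    return p, q, value

# Matching pennies: each player should randomize 50/50; the game value is 0.
print(solve_2x2_zero_sum(1, -1, -1, 1))  # (0.5, 0.5, 0.0)
```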

It can get pretty hard to keep track of the different cases in a game, so MatrixGame lets you give whatever labels you want for players and actions:

\n
\n
\n

\n

These labels are then used in visualizations:

\n
\n
\n

\n

What we just showed is actually a standard example game—the “prisoner’s dilemma”. In the Wolfram Language we now have GameTheoryData as a repository of about 50 standard games. Here’s one, specified to have 4 players:

\n
\n
\n

\n

And it’s less trivial to solve this game, but here’s the result—with 27 distinct solutions:

\n
\n
\n

\n

And, yes, the visualizations keep on working, even when there are more players (here we’re showing the 5-player case, indicating the 50th game solution):

\n
\n
\n

\n

It might be worth mentioning that the way we’re solving these kinds of games is by using our latest polynomial equation solving capabilities—and not only are we able to routinely find all possible Nash equilibria (not just a single fixed point), but we’re also able to get exact results:

\n
\n
\n

\n

In addition to matrix games, which model games in which players simultaneously pick their actions just once, we’re also supporting tree games, in which players take turns, producing a tree of possible outcomes, ending with a specified payoff for each of the players. Here’s an example of a very simple tree game:

\n
\n
\n

\n

We can get at least one solution to this game—described by a nested structure that gives the optimal probabilities for each action of each player at each turn:

\n
\n
\n

\n

Things with tree games can get more elaborate. Here’s an example—in which other players sometimes don’t know which branches were taken (as indicated by states joined by dashed lines):

\n
\n
\n

\n

What we’ve got in Version 14.2 represents rather complete coverage of the basic concepts in a typical introductory game theory course. But now, in typical Wolfram Language fashion, it’s all computable and extensible—so you can study more realistic games, and quickly do lots of examples to build intuition.

\n

We’ve so far concentrated on “classic game theory”, notably with the feature (relevant to many current applications) that all action nodes are the result of a different sequence of actions. However, games like tic-tac-toe (that I happened to recently study using multiway graphs) can be simplified by merging equivalent action nodes. Multiple sequences of actions may lead to the same game of tic-tac-toe, as is often the case for iterated games. These graph structures don’t fit into the kind of classic game theory trees we’ve introduced in Version 14.2—though (as my own efforts I think demonstrate) they’re uniquely amenable to analysis with the Wolfram Language.

\n

Computing the Syzygies, and Other Advances in Astronomy

\n

There are lots of “coincidences” in astronomy—situations where things line up in a particular way. Eclipses are one example. But there are many more. And in Version 14.2 there’s now a general function FindAstroEvent for finding these “coincidences”, technically called syzygies (“sizz-ee-gees”), as well as other “special configurations” of astronomical objects.

\n

A simple example is the September (autumnal) equinox:

\n
\n
\n

\n

Roughly this is when day and night are of equal length. More precisely, it’s when the sun is at one of the two positions in the sky where the plane of the ecliptic (i.e. the orbital plane of the earth around the sun) crosses the celestial equator (i.e. the projection of the earth’s equator)—as we can see here (the ecliptic is the yellow line; the celestial equator the blue one):

\n
\n
\n

\n

As another example, let’s find the next time over the next century when Jupiter and Saturn will be closest in the sky:

\n
\n
\n

\n

They’ll get close enough to see their moons together:

\n
\n
\n

\n

There are an incredible number of astronomical configurations that have historically been given special names. There are equinoxes, solstices, equiluxes, culminations, conjunctions, oppositions, quadratures—as well as periapses and apoapses (specialized to perigee, perihelion, periareion, perijove, perikrone, periuranion, periposeideum, etc.). In Version 14.2 we support all these.

\n

So, for example, this gives the next time Triton will be closest to Neptune:

\n
\n
\n

\n

A famous example has to do with the perihelion (closest approach to the Sun) of Mercury. Let’s compute the position of Mercury (as seen from the Sun) at all its perihelia in the first couple of decades of the nineteenth century:

\n
\n
\n

\n

We see that there’s a systematic “advance” (along with some wiggling):

\n
\n
\n

\n

So now let’s quantitatively compute this advance. We start by finding the times for the first perihelia in 1800 and 1900:

\n
\n
\n

\n

Now we compute the angular separation between the positions of Mercury at these times:

\n
\n
\n

\n

Then divide this by the time difference

\n
\n
\n

\n

and convert units:

\n
\n
\n

\n
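The unit bookkeeping in that last step is worth making explicit: an angular drift measured over some span of years gets converted to arcseconds per century. A small Python sketch with hypothetical numbers (not the blog's computed values for Mercury):

```python
# Sketch of the unit conversion: degrees of angular drift over a span of
# years, normalized to arcseconds per century.
def advance_arcsec_per_century(angle_degrees, years):
    arcsec = angle_degrees * 3600  # 1 degree = 3600 arcseconds
    return arcsec * (100 / years)  # normalize to one century

# e.g. a hypothetical 0.5 degrees of drift measured over 120 years:
print(advance_arcsec_per_century(0.5, 120))  # 1500.0
```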

Famously, 43 arcseconds per century of this is the result of deviations from the inverse square law of gravity introduced by general relativity—and, of course, accounted for by our astronomical computation system. (The rest of the advance is the result of traditional gravitational effects from Venus, Jupiter, Earth, etc.)

\n

PDEs Now Also for Magnetic Systems

\n

More than a decade and a half ago we made the commitment to make the Wolfram Language a full-strength PDE modeling environment. Of course it helped that we could rely on all the other capabilities of the Wolfram Language—and what we’ve been able to produce is immeasurably more valuable because of its synergy with the rest of the system. But over the years, with great effort, we’ve been steadily building up symbolic PDE modeling capabilities across all the standard domains. And at this point I think it’s fair to say that we can handle—at an industrial scale—a large part of the PDE modeling that arises in real-world situations.

\n

But there are always more cases for which we can build in capabilities, and in Version 14.2 we’re adding built-in modeling primitives for static and quasistatic magnetic fields. So, for example, here’s how we can now model an hourglass-shaped magnet. This defines boundary conditions—then solves the equations for the magnetic scalar potential:

\n
\n
\n

\n

We can then take that result, and, for example, immediately plot the magnetic field lines it implies:

\n
\n
\n

\n

Version 14.2 also adds the primitives to deal with slowly varying electric currents, and the magnetic fields they generate. All of this immediately integrates with our other modeling domains like heat transfer, fluid dynamics, acoustics, etc.

\n

There’s much to say about PDE modeling and its applications, and in Version 14.2 we’ve added more than 200 pages of additional textbook-style documentation about PDE modeling, including some research-level examples.

\n

New Features in Graphics, Geometry & Graphs

\n

Graphics has always been a strong area for the Wolfram Language, and over the past decade we’ve also built up very strong computational geometry capabilities. Version 14.2 adds some more “icing on the cake”, particularly in connecting graphics to geometry, and connecting geometry to other parts of the system.

\n

As an example, Version 14.2 adds geometry capabilities for more of what were previously just graphics primitives. For example, this is a geometric region formed by filling a Bézier curve:

\n
\n
\n

\n

And we can now do all our usual computational geometry operations on it:

\n
\n
\n

\n

Something like this now works too:

\n
\n
\n

\n

Something else new in Version 14.2 is MoleculeMesh, which lets you build computable geometry from molecular structures. Here’s a graphical rendering of a molecule:

\n
\n
\n

\n

And here now is a geometric mesh corresponding to the molecule:

\n
\n
\n

\n

We can then do computational geometry on this mesh:

\n
\n
\n

\n
\n
\n

\n

Another new feature in Version 14.2 is an additional method for graph drawing that can make use of symmetries. If you make a layered graph from a symmetrical grid, it won’t immediately render in a symmetrical way:

\n
\n
\n

\n

But with the new \"SymmetricLayeredEmbedding\" graph layout, it will:

\n
\n
\n

\n

User Interface Tune-Ups

\n

Making a great user interface is always a story of continued polishing, and we’ve now been doing that for the notebook interface for nearly four decades. In Version 14.2 there are several notable pieces of polish that have been added. One concerns autocompletion for option values.

\n

We’ve long shown completions for options that have a discrete collection of definite common settings (such as All, Automatic, etc.). In Version 14.2 we’re adding “template completions” that give the structure of settings, and then let you tab through to fill in particular values. In all these years, one of the places I pretty much always find myself going to in the documentation is the settings for FrameLabel. But now autocompletion immediately shows me the structure of these settings:

\n

Interface settings autocompletion

\n

Also in autocompletion, we’ve added the capability to autocomplete context names, context aliases, and symbols that include contexts. And in all cases, the autocompletion is “fuzzy” in the sense that it’ll trigger not only on characters at the beginning of a name but on ones anywhere in the name—which means that you can just type characters in the name of a symbol, and relevant contexts will appear as autocompletions.

\n

Another small convenience added in Version 14.2 is the ability to drag images from one notebook to any other notebook, or, for that matter, to any other application that can accept dragged images. It’s been possible to drag images from other applications into notebooks, but now you can do it the other way too.

\n

Something else that’s for now specific to macOS is enhanced support for icon preview (as well as Quick Look). So now if you have a folder full of notebooks and you select Icon view, you’ll see a little representation of each notebook as an icon of its content:

\n

Notebook icon preview

\n

Under the hood in Version 14.2 there are also some infrastructural developments that will enable significant new features in subsequent versions. Some of these involve generalized support for dark mode. (Yes, one might initially imagine that dark mode would somehow be trivial, but when you start thinking about all the graphics and interface elements that involve colors, it’s clear it’s not. Though, for example, after significant effort we did recently release dark mode for Wolfram|Alpha.)

\n

So, for example, in Version 14.2 you’ll find the new symbol LightDarkSwitched, which is part of the mechanism for specifying styles that will automatically switch for light and dark modes. And, yes, there is a style option LightDark that will switch modes for notebooks—and which is at least experimentally supported.

\n

Related to light/dark mode is also the notion of theme colors: colors that are defined symbolically and can be switched together. And, yes, there’s an experimental symbol ThemeColor related to these. But the full deployment of this whole mechanism won’t be there until the next version.

\n

The Beginnings of Going Native on GPUs

\n

Many important pieces of functionality inside the Wolfram Language automatically make use of GPUs when they are available. And already 15 years ago we introduced primitives for low-level GPU programming. But in Version 14.2 we’re beginning the process of making GPU capabilities more readily available as a way to optimize general Wolfram Language usage. The key new construct is GPUArray, which represents an array of data that will (if possible) be stored so as to be immediately and directly accessible to your GPU. (On some systems, it will be stored in separate “GPU memory”; on others, such as modern Macs, it will be stored in shared memory in such a way as to be directly accessible by the GPU.)

\n

In Version 14.2 we’re supporting an initial set of operations that can be performed directly on GPU arrays. The operations available vary slightly from one type of GPU to another. Over time, we expect to use or create many additional GPU libraries that will extend the set of operations that can be performed on GPU arrays.

\n

Here is a random ten-million-element vector stored as a GPU array:

\n
\n
\n

\n

The GPU on the Mac on which I am writing this supports the necessary operations to do this purely in its GPU, giving back a GPUArray result:

\n
\n
\n

\n

Here’s the timing:

\n
\n
\n

\n

And here’s the corresponding ordinary (CPU) result:

\n
\n
\n

\n

In this case, the GPUArray result is about a factor of 2 faster. What factor you get will vary with the operations you’re doing, and the particular hardware you’re using. So far, the largest factors I’ve seen are around 10x. But as we build more GPU libraries, I expect this to increase—particularly when what you’re doing involves a lot of compute “inside the GPU”, and not too much memory access.

\n

By the way, if you sprinkle GPUArray in your code it’ll normally never affect the results you get—because operations always default to running on your CPU if they’re not supported on your GPU. (Usually GPUArray will make things faster, but if there are too many “GPU misses” then all the “attempts to move data” may actually slow things down.) It’s worth realizing, though, that GPU computation is still not at all well standardized or uniform. Sometimes there may only be support for vectors, sometimes also matrices—and there may be different data types with different numerical precision supported in different cases.

\n
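That "never changes results, only timings" guarantee comes from a dispatch-with-fallback pattern: try the GPU code path, and silently fall back to the CPU when an operation isn't supported. Here's a conceptual Python sketch of the pattern (a toy class, not the GPUArray implementation—the "GPU" branch here is just a placeholder):

```python
# Sketch (conceptual): a wrapper that tries a "GPU" code path and falls
# back to the CPU when the operation isn't supported, so results never
# change -- only timings do.
class DeviceArray:
    GPU_OPS = {"add", "multiply"}  # pretend only these run on the GPU

    def __init__(self, data):
        self.data = list(data)

    def apply(self, op, other):
        if op in self.GPU_OPS:
            pass  # dispatch to a GPU kernel here; same math either way
        # CPU fallback (and the reference semantics):
        fn = {"add": lambda a, b: a + b,
              "multiply": lambda a, b: a * b,
              "power": lambda a, b: a ** b}[op]
        return DeviceArray(fn(a, b) for a, b in zip(self.data, other.data))

x = DeviceArray([1, 2, 3])
y = DeviceArray([4, 5, 6])
print(x.apply("add", y).data)    # [5, 7, 9]
print(x.apply("power", y).data)  # [1, 32, 729] -- a "GPU miss", CPU path
```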

And Even More…

\n

In addition to all the things we’ve discussed here so far, there are also a host of other “little” new features in Version 14.2. But even though they may be “little” compared to other things we’ve discussed, they’ll be big if you happen to need just that functionality.

\n

For example, there’s MidDate—that computes the midpoint of dates:

\n
\n
\n

\n
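The basic midpoint-of-dates computation is straightforward to sketch in Python (this shows the simple case only; it doesn't attempt MidDate's calendar subtleties like week or month granularity):

```python
from datetime import date

# Sketch (Python analog): the midpoint of two dates.
def mid_date(d1, d2):
    return d1 + (d2 - d1) / 2

print(mid_date(date(2025, 1, 1), date(2025, 12, 31)))  # 2025-07-02
```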

And like almost everything involving dates, MidDate is full of subtleties. Here it’s computing the week 2/3 of the way through this year:

\n
\n
\n

\n

In math, functions like DSolve and SurfaceIntegrate can now deal with symbolic array variables:

\n
\n
\n

\n

SumConvergence now lets one specify the range of summation, and can give conditions that depend on it:

\n
\n
\n

\n

A little convenience that, yes, I asked for, is that DigitCount now lets you specify how many digits altogether you want to assume your number has, so that it appropriately counts leading 0s:

\n
\n
\n

\n
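The leading-zeros subtlety is easy to see in a sketch: if you declare the total number of digits up front, the pad of zeros gets counted too. A Python analog of the idea (not the DigitCount implementation):

```python
# Sketch (Python analog): count occurrences of each digit in n, optionally
# assuming the number is written with a fixed total number of digits, so
# that leading zeros are counted as well.
def digit_count(n, base=10, total_digits=None):
    digits = []
    m = n
    while m:
        m, d = divmod(m, base)
        digits.append(d)
    if not digits:
        digits = [0]
    if total_digits is not None:
        digits += [0] * (total_digits - len(digits))  # pad leading zeros
    return {d: digits.count(d) for d in range(base)}

print(digit_count(5, total_digits=4)[0])  # 3 -- the three leading zeros
print(digit_count(5)[0])                  # 0 -- no padding, no zeros
```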

Talking of conveniences, for functions like MaximalBy and TakeLargest we added a new argument that says how to sort elements to determine “the largest”. Here’s the default numerical order

\n
\n
\n

\n

and here’s what happens if we use “symbolic order” instead:

\n
\n
\n

\n
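Python's standard library has the same "largest according to what order?" knob, which makes for a compact analog of the new argument: `heapq.nlargest` takes a key function, so swapping numerical order for string ("symbolic-style") order is one argument:

```python
import heapq

# Sketch (Python analog): "take largest" with a pluggable notion of order --
# numerical by default, or e.g. lexicographic via a key function.
def take_largest(items, n, key=None):
    return heapq.nlargest(n, items, key=key)

vals = [10, 2, 33, 4]
print(take_largest(vals, 2))           # [33, 10]  numerical order
print(take_largest(vals, 2, key=str))  # [4, 33]   lexicographic order
```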

There are always so many details to polish. Like in Version 14.2 there’s an update to MoonPhase and related functions, both new things to ask about, and new methods to compute them:

\n
\n
\n

\n
\n
\n

\n

In another area, in addition to major new import/export formats (particularly to support Tabular) there’s an update to \"Markdown\" import that gives results in plaintext, and there’s an update to \"PDF\" import that gives a mixed list of text and images.

\n

And there are lots of other things too, as you can find in the “Summary of New and Improved Features in 14.2”. By the way, it’s worth mentioning that if you’re looking at a particular documentation page for a function, you can always find out what’s new in this version just by pressing show changes:

\n

Show changes

\n

\n\n

\nDownload your 14.2 now! »  (It’s already live in the Wolfram Cloud!)\n
\n", + "category": "Data Science", + "link": "https://writings.stephenwolfram.com/2025/01/launching-version-14-2-of-wolfram-language-mathematica-big-data-meets-computation-ai/", + "creator": "Stephen Wolfram", + "pubDate": "Thu, 23 Jan 2025 19:00:09 +0000", + "enclosure": "https://content.wolfram.com/sites/43/2025/01/sw012225followthatimg1a.mp4", + "enclosureType": "video/mp4", + "image": "https://content.wolfram.com/sites/43/2025/01/sw012225followthatimg1a.mp4", + "id": "", + "language": "en", + "folder": "", + "feed": "wolfram", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "2a8d9a60673d3e92c76bdb176116f494", + "highlights": [] + }, + { + "title": "Who Can Understand the Proof? A Window on Formalized Mathematics", + "description": "\"\"Related writings: “Logic, Explainability and the Future of Understanding” (2018) » “The Physicalization of Metamathematics and Its Implications for the Foundations of Mathematics” (2022) » “Computational Knowledge and the Future of Pure Mathematics” (2014) » The Simplest Axiom for Logic Theorem (Wolfram with Mathematica, 2000): The single axiom ((a•b)•c)•(a•((a•c)•a))c is a complete axiom system for Boolean algebra (and […]", + "content": "\"\"
Related writings:
\n“Logic, Explainability and the Future of Understanding” (2018) »
\n“The Physicalization of Metamathematics and Its Implications for the Foundations of Mathematics” (2022) »
\n“Computational Knowledge and the Future of Pure Mathematics” (2014) »\n
\n

The Simplest Axiom for Logic

\n\n\n\n

Theorem (Wolfram with Mathematica, 2000):
The single axiom ((a•b)•c)•(a•((a•c)•a)) == c is a complete axiom system for Boolean algebra (and is the simplest possible)

\n

For more than a century people had wondered how simple the axioms of logic (Boolean algebra) could be. On January 29, 2000, I found the answer—and made the surprising discovery that they could be about twice as simple as anyone knew. (I also showed that what I found was the simplest possible.)

\n

It was an interesting result—that gave new intuition about just how simple the foundations of things can be, and for example helped inspire my efforts to find a simple underlying theory of physics.

\n

But how did I get the result? Well, I used automated theorem proving (specifically, what’s now FindEquationalProof in Wolfram Language). Automated theorem proving is something that’s been around since at least the 1950s, and its core methods haven’t changed in a long time. But in the rare cases it’s been used in mathematics it’s typically been to confirm things that were already believed to be true. And in fact, to my knowledge, my Boolean algebra axiom is actually the only truly unexpected result that’s ever been found for the first time using automated theorem proving.

\n

But, OK, so we know it’s true. And that’s interesting. But what about the proof? Does the proof, for example, show us why the result is true? Well, actually, in a quarter of a century, nobody (including me) has ever made much headway at all in understanding the proof (which, at least in the form we currently know it, is long and complicated). So is that basically inevitable—say as a consequence of computational irreducibility? Or is there some way—perhaps using modern AI—to “humanize” the proof to a point where one can understand it?

\n

It is, I think, an interesting challenge—that gets at the heart of what one can (and can’t) expect to achieve with formalized mathematics. In what follows, I’ll discuss what I’ve been able to figure out—and how it relates to foundational questions about what mathematics is and how it can be done. And while I think I’ve been able to clarify some of the issues, the core problem is still out there—and I’d like to issue it here as a challenge:

\n

Challenge: Understand the proof of the Theorem

\n

What do I mean by “understand”? Inevitably, “understand” has to be defined in human terms. Something like “so a human can follow and reproduce it”—and, with luck, feel like saying “aha!” at some point, the kind of way they might on hearing a proof of the Pythagorean theorem (or, in logic, something like de Morgan’s law Not[And[p, q]] == Or[Not[p], Not[q]]).

\n

It should be said that it’s certainly not clear that such an understanding would ever be possible. After all, as we’ll discuss, it’s a basic metamathematical fact that out of all possible theorems almost none have short proofs, at least in terms of any particular way of stating the proofs. But what about an “interesting theorem” like the one we’re considering here? Maybe that’s different. Or maybe, at least, there’s some way of building out a “higher-level mathematical narrative” for a theorem like this that will take one through the proof in human-accessible steps.

\n

In principle one could always imagine a somewhat bizarre scenario in which people would just rote learn chunks of the proof, perhaps giving each chunk some name (a bit like how people learned bArbArA and cElArEnt syllogisms in the Middle Ages). And in terms of these chunks there’d presumably then be a “human way” to talk about the proof. But learning the chunks—other than as some kind of recreational or devotional activity—doesn’t seem to make much sense unless there’s metamathematical structure that somehow connects the chunks to “general concepts” that are widely useful elsewhere.

\n

But of course it’s still conceivable that there might be a “big theory” that would lead us to the theorem in an “understandable way”. And that could be a traditional mathematical theory, built up with precise, if potentially very abstract, constructs. But what about something more like a theory in natural science? In which we might treat our automatically generated proof as an object for empirical study—exploring its characteristics, trying to get intuition about it, and ultimately trying to deduce the analog of “natural laws” that give us a “human-level” way of understanding it.


Of course, for many purposes it doesn’t really matter why the theorem is true. All that matters is that it is true, and that one can deduce things on the basis of it. But as one thinks about the future of mathematics, and the future of doing mathematics, it’s interesting to explore to what extent it might or might not ultimately be possible to understand in a human-accessible way the kind of seemingly alien result that the theorem represents.


The Proof as We Know It


I first presented a version of the proof on two pages of my 2002 book A New Kind of Science, printing it in 4-point type to make it fit:


Axiom proof


Today, generating a very similar proof is a one-liner in Wolfram Language (as we’ll discuss below, the · dot here can be thought of as representing the Nand operation):


The proof involves 307 (mostly rather elaborate) steps. And here’s one page of it (out of about 30)—presented in the form of a computable Wolfram Language dataset:

Example proof steps page

What’s the basic idea of this proof? Essentially it’s to perform a sequence of purely structural symbolic operations that go from our axiom to known axioms of Boolean algebra. And the proof does this by proving a series of lemmas which can be combined to eventually give what we want:


The highlighted “targets” here are the standard Sheffer axioms for Boolean algebra from 1913:


And, yes, even though these are quite short, the intermediate lemmas involved in the proof get quite long—the longest involving 60 symbols (i.e. having LeafCount 60):


It’s as if to get to where it’s going, the proof ends up having to go through the wilds of metamathematical space. And indeed one gets a sense of this if one plots the sizes (i.e. LeafCount) of successive lemmas:


Here’s the distribution of these sizes, showing that while they’re often small, there’s a long tail (note, by the way, that if dot · appears n times in a lemma, the LeafCount will be 2n + 3):
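As a quick sanity check of that 2n + 3 formula—in Python rather than Wolfram Language, with each side of a lemma represented (purely for illustration) as a nested tuple ("·", left, right) and variables as plain strings:

```python
# Illustrative sketch (Python, not Wolfram Language). Each side of a lemma is
# a nested tuple ("·", left, right); variables are plain strings.
def leaf_count(expr):
    """Count atoms the way LeafCount does: heads count too, so the
    subexpression ("·", a, b) contributes 1 plus the atoms of a and b."""
    if isinstance(expr, str):
        return 1
    return 1 + sum(leaf_count(arg) for arg in expr[1:])

def dot_count(expr):
    """Number of · operators in an expression."""
    if isinstance(expr, str):
        return 0
    return 1 + sum(dot_count(arg) for arg in expr[1:])

def lemma_leaf_count(lhs, rhs):
    # A lemma lhs == rhs is the expression Equal[lhs, rhs]: one extra head atom.
    return 1 + leaf_count(lhs) + leaf_count(rhs)

# Example: a·a == a·((a·a)·a) contains n = 4 dots, so LeafCount is 2*4 + 3 = 11.
lhs = ("·", "a", "a")
rhs = ("·", "a", ("·", ("·", "a", "a"), "a"))
n = dot_count(lhs) + dot_count(rhs)
assert lemma_leaf_count(lhs, rhs) == 2 * n + 3
```

The 2n + 3 comes from the fact that a side with k dots has k · heads and k + 1 variable leaves, i.e. 2k + 1 atoms—so the two sides together contribute 2n + 2 atoms, plus one more for the == itself.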


So how are these lemmas related? Here’s a graph of their interdependence (with the size of each dot being proportional to the size of the lemma it represents):


Zooming in on the top we see more detail:


We start from our axiom, then derive a whole sequence of lemmas—as we’ll see later, always combining two lemmas to create a new one. (And, yes, we could equally well call these things theorems—but we generate so many of them it seems more natural to call them “lemmas”.)


So, OK, we’ve got a complicated proof. But how can we check that it’s correct? Well, from the symbolic representation of the proof in the Wolfram Language we can immediately generate a “proof function” that in effect contains executable versions of all the lemmas—implemented using simple structural operations:


And when you run this function, it applies all these lemmas and checks that the result comes out right:


And, yes, this is basically what one would do in a proof assistant system (like Lean or Metamath)—except that here the steps in the proof were generated purely automatically, without any human guidance (or effort). And, by the way, the fact that we can readily translate our symbolic proof representation into a function that we can run provides an operational manifestation of the equivalence between proofs and programs.


But let’s look back at our lemma-interdependence “proof graph”. One notable feature is that we see several nodes with high out-degree—corresponding to what we can think of as “pivotal lemmas” from which many other lemmas end up directly being proved. So here’s a list of the “most pivotal” lemmas in our proof:


Or, more graphically, here are the results for all lemmas that occur:


So what are the “pivotal lemmas”? a · b = b · a we readily recognize as commutativity. But the others—despite their comparative simplicity—don’t seem to correspond to things that have specifically shown up before in the mathematical literature (or, as we’ll discuss later, that’s at least what the current generation of LLMs tell us).


But looking at our proof graph something we can conclude is that a large fraction of the “heavy lifting” needed for the whole proof has already happened by the time we can prove a · b = b · a. So, for the sake of avoiding at least some of the hairy detail in the full proof, in most of what follows we’ll concentrate on the proof of a · b = b · a—which FindEquationalProof tells us we can accomplish in 104 steps, with a proof graph of the form


with the sizes of successive lemmas (in what is basically a breadth-first traversal of the proof graph) being:


The “Machine Code” of the Proof


It’s already obvious from the previous section that the proof as we currently know it is long, complicated, and fiddly—and in many ways reminiscent of something at a “machine-code” level. But to get a grounded sense of what’s going on in the proof, it’s useful to dive into the details—even if, yes, they can be seriously hard to wrap one’s head around.


At a fundamental level, the way the proof—say of a · b = b · a—works is by starting from our axiom, and then progressively deducing new lemmas from pairs of existing lemmas. In the simplest case, that deduction works by straightforward symbolic substitution. So, for example, let’s say we have the lemmas


and


Then it turns out that from these lemmas we can deduce:


Or, in other words, knowing that the first two lemmas hold for any a gives us enough information about · that the third lemma must inevitably also hold. So how do we derive this?


Our lemmas in effect define two-way equivalences: their left-hand sides are defined as equal to their right-hand sides, which means that if we see an expression that (structurally) matches one side of a lemma, we can always replace it by the other side of the lemma. And to implement this, we can write our second lemma explicitly as a rule—where to avoid confusion we’re using x rather than a:


But if we now look at our first lemma, we see that there’s part of it (indicated with a frame) that matches the left-hand side of our rule:


If we replace this part (which is at position {2,2}) using our rule we then get


which is precisely the lemma we wanted to deduce.
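The substitution step just traced through can be sketched in a few lines of Python (an illustrative stand-in, not the actual FindEquationalProof machinery; expressions are nested tuples, strings ending in “_” play the role of Wolfram Language pattern variables, and the rule and lemma in the example are hypothetical):

```python
# Illustrative sketch (Python, not Wolfram Language) of one substitution step:
# apply a rule lhs -> rhs at a given position inside an expression.
def match(pattern, expr, bindings=None):
    """Structural match; strings ending in "_" act as pattern variables."""
    bindings = dict(bindings or {})
    if isinstance(pattern, str) and pattern.endswith("_"):
        if pattern in bindings:          # variable seen before: must agree
            return bindings if bindings[pattern] == expr else None
        bindings[pattern] = expr
        return bindings
    if isinstance(pattern, str) or isinstance(expr, str):
        return bindings if pattern == expr else None
    if len(pattern) != len(expr) or pattern[0] != expr[0]:
        return None
    for p, e in zip(pattern[1:], expr[1:]):
        bindings = match(p, e, bindings)
        if bindings is None:
            return None
    return bindings

def substitute(expr, bindings):
    """Fill pattern variables in expr using bindings."""
    if isinstance(expr, str):
        return bindings.get(expr, expr)
    return (expr[0],) + tuple(substitute(a, bindings) for a in expr[1:])

def replace_at(expr, pos, new):
    """Replace the subexpression at index path pos (like {2,2}) with new."""
    if not pos:
        return new
    parts = list(expr)
    parts[pos[0]] = replace_at(parts[pos[0]], pos[1:], new)
    return tuple(parts)

def apply_rule(expr, pos, lhs, rhs):
    sub = expr
    for i in pos:                        # walk down to the subterm
        sub = sub[i]
    b = match(lhs, sub)
    assert b is not None, "rule does not match at this position"
    return replace_at(expr, pos, substitute(rhs, b))

# Hypothetical example: apply the rule x_·x_ -> x_·(x_·x_) to the subterm at
# position {2} of a·(b·b), giving a·(b·(b·b)).
expr = ("·", "a", ("·", "b", "b"))
out = apply_rule(expr, (2,), ("·", "x_", "x_"),
                 ("·", "x_", ("·", "x_", "x_")))
assert out == ("·", "a", ("·", "b", ("·", "b", "b")))
```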


We can summarize what happened here as a fragment of our proof graph—in which a “substitution event” node takes our first two lemmas as input, and “outputs” our final lemma:


As always, the symbolic expressions we’re working with here can be represented as trees:


The substitution event then corresponds to a tree rewriting:


The essence of automated theorem proving is to find a particular sequence of substitutions etc. that get us from whatever axioms or lemmas we’re starting with, to whatever lemmas or theorems we want to reach. Or in effect to find a suitable “path” through the multiway graph of all possible substitutions etc. that can be made.


So, for example, in the particular case we’re considering here, this is the graph that represents all possible transformations that can occur through a single substitution event:


The particular transformation (or “path”) we’ve used to prove a · a = a · ((a · a) · a) is highlighted. But as we can see, there are many other possible lemmas that can be generated, or in other words that can be proved from the two lemmas we’ve given as input. Put another way, we can think of our input lemmas as implying or entailing all the other lemmas shown here. And, by analogy to the concept of a light cone in physics, we can view the collection of everything entailed by given lemmas or given events as the (future) “entailment cone” of those lemmas or events. A proof that reaches a particular lemma is then effectively a path in this entailment cone—analogous in physics to a world line that reaches a particular spacetime point.


If we continue building out the entailment cone from our original lemmas, then after two (substitution) events we get:


There are 21 lemmas generated here. But it turns out that beyond the lemma we already discussed there are only three (highlighted here) that appear in the proof we are studying here:


And indeed the main algorithmic challenge of theorem proving is to figure out which lemmas to generate in order to get a path to the theorem one’s trying to prove. And, yes, as we’ll discuss later, there are typically many paths that will work, and different algorithms will yield different paths and therefore different proofs.


But, OK, seeing how new lemmas can be derived from old by substitution is already quite complicated. But actually there’s something even more complicated we need to discuss: deriving lemmas not only by substitution but also by what we’ve called bisubstitution.


We can think of both substitution and bisubstitution as turning one lemma X == Y into a transformation rule (either X → Y or Y → X), and then applying this rule to another lemma, to derive a new lemma. In ordinary substitution, the left-hand side of the rule directly matches (in a Wolfram Language pattern-matching sense) a subexpression in the lemma we’re transforming. But the key point is that all the variables that appear in both our lemmas are really “pattern variables” (x_ etc. in Wolfram Language). So that means there’s another way that one lemma can transform another, in which replacements are in effect made not only in the lemma being transformed, but also in the lemma that’s doing the transforming.
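The “replacements on both sides” that make bisubstitution work are essentially what’s elsewhere called unification. Here’s a minimal sketch in Python (illustrative only, not the Wolfram Language internals; it omits the occurs check, and the terms unified in the example are hypothetical):

```python
# Minimal unification sketch (Python): find bindings for pattern variables
# (strings ending in "_") on *both* sides that make two terms identical.
def walk(term, subst):
    """Chase a variable through the current bindings."""
    while isinstance(term, str) and term.endswith("_") and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst=None):
    subst = dict(subst if subst is not None else {})
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a.endswith("_"):
        subst[a] = b                     # bind a variable from the first term
        return subst
    if isinstance(b, str) and b.endswith("_"):
        subst[b] = a                     # ...or from the second term
        return subst
    if isinstance(a, str) or isinstance(b, str) \
            or len(a) != len(b) or a[0] != b[0]:
        return None                      # structural clash: no unifier
    for x, y in zip(a[1:], b[1:]):
        subst = unify(x, y, subst)
        if subst is None:
            return None
    return subst

# Hypothetical example: unifying x_·(y_·y_) with (a_·b_)·c_ requires binding
# variables in *both* terms: x_ -> a_·b_ and c_ -> y_·y_.
s = unify(("·", "x_", ("·", "y_", "y_")), ("·", ("·", "a_", "b_"), "c_"))
assert s == {"x_": ("·", "a_", "b_"), "c_": ("·", "y_", "y_")}
```

Ordinary substitution is the special case where all the bindings end up on the rule’s side; bisubstitution is what one gets when the lemma being transformed has to accept bindings too.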


The net effect, though, is still to take two lemmas and derive another, as in:


But in tracing through the details of our proof, we need to distinguish “substitution events” (shown yellowish) from “bisubstitution” ones (shown reddish). (In FindEquationalProof in Wolfram Language, lemmas produced by ordinary substitution are called “substitution lemmas”, while lemmas produced by bisubstitution are called “critical pair lemmas”.)


OK, so how does bisubstitution work? Let’s look at an example. We’re going to be transforming the lemma


using the lemma (which in this case happens to be our original axiom)


to derive the new lemma:


We start by creating a rule from the second lemma. In this case, the rule we need happens to be reversed relative to the way we wrote the lemma, and this means that (in the canonical form we’re using) it’s convenient to rename the variables that appear:


To do our bisubstitution we’re going to apply this rule to a subterm of our first lemma. We can write that first lemma with explicit pattern variables:


As always, the particular names of those variables don’t matter. And to avoid confusion, we’re going to rename them:


Now look at this subterm of this lemma (which is part {2,1,1,2} of the expression):


It turns out that with appropriate bindings for pattern variables this can be matched (or “unified”) with the left-hand side of our rule. This provides a way to find such bindings:


(Note that in these bindings things like c_ stand only for explicit expressions, like c_, not for expressions that the ordinary Wolfram Language pattern c_ would match.)


Now if we apply the bindings we’ve found to the left-hand side of our rule


and to the subterm we picked out from our lemma


we see that we get the same expression. Which means that with these bindings the subterm matches the left-hand side of our rule, and we can therefore replace this subterm with the right-hand side of the rule. To see all this in operation, we first apply the bindings we’ve found to the lemma we’re going to transform (and, as it happens, the binding for y_ is the only one that matters here):


Now we take this form and apply the rule at the position of the subterm we identified:


Renaming variables


we now finally get exactly the lemma that we were trying to derive:


And, yes, getting here was a pretty complicated process. But with the symbolic character of our lemmas, it’s one that is inevitably possible, and so can be used in our proof. And in the end, out of the 101 lemmas used in the proof, 47 were derived by ordinary substitution, while 54 were derived by bisubstitution.


And indeed the first few steps of the proof turn out to use only bisubstitution. An example is the first step—which effectively applies the original axiom to itself using bisubstitution:


And, yes, even this very first step is pretty difficult to follow.


If we start from the original axiom, there are 16 lemmas that can be derived purely by a single ordinary substitution (effectively of the axiom into itself)—resulting in the following entailment cone:


As it happens, though, none of the 16 new lemmas here actually get used in our proof. On the other hand, in the bisubstitution entailment cone


there are 24 new lemmas, and 4 of them get used in the proof—as we can see from the first level of the proof graph (here rotated for easier rendering):


At the next level of the entailment cone from ordinary substitution, there are 5062 new lemmas—none of which get used in the proof. But of the 31431 new lemmas in the (pure) bisubstitution entailment cone, 13 do get used:


At the next level, lemmas generated by ordinary substitution also start to get used:


Here’s another rendering of these first few levels of the proof graph:


Going to another couple of levels we’re starting to see quite a few independent chains of lemmas developing


which eventually join up when we assemble the whole proof graph:


A notable feature of this proof graph is that it has more bisubstitution events at the top, and more ordinary substitution events at the bottom. So why is that? Essentially it seems to be because bisubstitution events tend to produce larger lemmas, and ordinary substitution events tend to produce smaller ones—as we can see if we plot input and output lemma sizes for all events in the proof:


So in effect what seems to be happening is that the proof first has to “spread out in metamathematical space”, using bisubstitution to generate large lemmas “far out in metamathematical space”. Then later the proof has to “corral things back in”, using ordinary substitution to generate smaller lemmas. And for example, at the very end, it’s a substitution event that yields the final theorem we’re trying to prove:


And earlier in the graph, there’s a similar “collapse” to a small (and rather pivotal) lemma:


As the plot above indicates, though, ordinary substitution can sometimes lead to large lemmas, and bisubstitution can likewise lead to smaller ones, as in


or slightly more dramatically:


But, OK, so this is some of what’s going on at a “machine-code” level inside the proof we have. Of course, given our axiom and the operations of substitution and bisubstitution there are inevitably a huge number of different possible proofs that could be given. The particular proof we’re considering is what the Wolfram Language FindEquationalProof gives. (In the Appendix, we’ll also look at results from some other automated theorem proving systems. The results will be very comparable, if usually a little lengthier.)


We won’t discuss the detailed (and rather elaborate) algorithms inside FindEquationalProof. But fundamentally what they’re doing is to try constructing certain lemmas, then to find sequences of lemmas that eventually form a “path” to what we’re trying to prove. And as some indication of what’s involved in this, here’s a plot of the number of “candidate lemmas” that are being maintained as possible when different lemmas in the proof are generated:


And, yes, for a while there’s roughly exponential growth, leveling off at just over a million when we get to the “pulling everything together” stage of the proof.


Unrolling the Proof


In what we’ve done so far, we’ve viewed our proof as working by starting from an axiom, then progressively building up lemmas, until eventually we get to the theorem we want. But there’s an alternative view that’s in some ways useful in getting a more direct, “mechanical” intuition about what’s going on in the proof.


Let’s say we’re trying to prove that our axiom implies that p · q = q · p. Well, then there must be some way to start from the expression p · q and just keep on judiciously applying the axiom until eventually we get to the expression q · p. And, yes, the number of axiom application steps required might be very large. But ultimately, if it’s true that the axiom implies p · q = q · p there must be a path that gets from p · q to q · p.
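Finding such a path is at heart a graph-search problem. As a toy illustration—in Python, and with a hypothetical axiom (associativity) standing in for the actual Nand axiom—one can breadth-first search the space of expressions reachable by single axiom applications:

```python
# Toy sketch of "unrolled" proof search (Python; the axiom here is the
# hypothetical (x·y)·z == x·(y·z), not the Nand axiom from the article).
from collections import deque

def match(p, e, b=None):
    """Structural match; strings ending in "_" act as pattern variables."""
    b = dict(b or {})
    if isinstance(p, str) and p.endswith("_"):
        if p in b:
            return b if b[p] == e else None
        b[p] = e
        return b
    if isinstance(p, str) or isinstance(e, str):
        return b if p == e else None
    if len(p) != len(e) or p[0] != e[0]:
        return None
    for pp, ee in zip(p[1:], e[1:]):
        b = match(pp, ee, b)
        if b is None:
            return None
    return b

def subst(t, b):
    if isinstance(t, str):
        return b.get(t, t)
    return (t[0],) + tuple(subst(a, b) for a in t[1:])

def rewrites(expr, lhs, rhs):
    """All expressions reachable by applying lhs -> rhs at one subterm."""
    b = match(lhs, expr)
    if b is not None:
        yield subst(rhs, b)
    if not isinstance(expr, str):
        for i in range(1, len(expr)):
            for new in rewrites(expr[i], lhs, rhs):
                yield expr[:i] + (new,) + expr[i + 1:]

LHS = ("·", ("·", "x_", "y_"), "z_")     # (x·y)·z
RHS = ("·", "x_", ("·", "y_", "z_"))     # x·(y·z)

def shortest_path_length(start, target, max_steps=10):
    """BFS over axiom applications, using the axiom in both directions."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        e, d = frontier.popleft()
        if e == target:
            return d
        if d < max_steps:
            for l, r in ((LHS, RHS), (RHS, LHS)):
                for n in rewrites(e, l, r):
                    if n not in seen:
                        seen.add(n)
                        frontier.append((n, d + 1))
    return None

# ((a·b)·c)·d can be rewritten to a·(b·(c·d)) in 2 axiom applications.
start = ("·", ("·", ("·", "a", "b"), "c"), "d")
target = ("·", "a", ("·", "b", ("·", "c", "d")))
assert shortest_path_length(start, target) == 2
```

The catch, of course, is that for the actual Nand axiom this frontier grows explosively—which is why a practical prover has to prune and prioritize rather than search exhaustively.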


But before considering the case of our full proof, let’s start with something simpler. Let’s assume that we’ve already established the lemmas:


Then we can treat them as axioms, and ask a question like whether they imply the lemma


or, in our current approach, whether they can be used to form a path from a · a to a · (a · (a · a)).


Well, it’s not too hard to see that in fact there is such a path. Apply our second lemma to a · a to get:


But now this subterm


matches the left-hand side of the first lemma, so that it can be replaced by the right-hand side of that lemma (i.e. by a · (a · a)), giving in the end the desired a · (a · (a · a)).


So now we can summarize this process as:


In what follows, it’ll be convenient to label lemmas. We’ll call our original axiom A1, our successive lemmas generated by ordinary substitution Sn, and the ones generated by bisubstitution Bn:


In our proof we’ll also use → and ← to indicate whether we’re going to use the lemma (say X = Y) in the “forward direction” X → Y or the “reverse direction” X ← Y. And with this labeling, the proof we just gave (which is for the lemma S23) becomes:


Each step here is a pure substitution, and requires no replacement in the rule (i.e. “axiom”) being used. But proofs like this can also be done with bisubstitution, where replacements are applied to the rule to get it in a form where it can directly be applied to transform an expression:


OK, so how about the first lemma in our full proof? Here’s a proof that its left-hand side can be transformed to its right-hand side just by judiciously applying the original axiom:


Here’s a corresponding proof for the second lemma:


Both these involve bisubstitution. Here’s a proof of the first lemma derived purely by ordinary substitution:


This proof is using not only the original axiom but also the lemma B5. Meanwhile, B5 can be proved using the original axiom together with B2:


But now, inserting the proof we just gave above for B2, we can give a proof of B5 just in terms of the original axiom:


And recursively continuing this unrolling process, we can then prove S1 purely using the original axiom:


What about the whole proof? Well, at the very end we have:


If we “unroll” one step we have


and after 2 steps:


In principle we could go on with this unrolling, in effect recursively replacing each rule by the sequence of transformations that represents its proof. Typically this process will, however, generate exponentially longer proof sequences. But say for lemma S5


the result is still very easily manageable:


We can summarize this result by in effect plotting the sizes of the intermediate expressions involved—and indicating what part of each expression is replaced at each step (with A1→ as above indicating “forward” use of the axiom A1, and A1← “backward” use):


For lemma B33


the unrolled proof is now 30 steps long


while for lemma S11


the unrolled proof is 88 steps long:


But here there is a new subtlety. Doing a direct substitution of the “proof paths” for the lemmas used to prove S11 in our original proof gives a proof of length 104:


But this proof turns out to be repetitive, with the whole gray section going from one copy to another of:


As an example of a larger proof, we can consider lemma B47:


And despite the simplicity of this lemma, our proof for it is 1008 steps long:


If we don’t remove repetitive sections, it’s 6805 steps:


Can we unroll the whole proof of a · b = b · a? We can get closer by considering lemma S36:


Its proof is 27105 steps long:


The distribution of expression sizes follows a roughly exponential distribution, with a maximum of 20107:


Plotting the expression sizes on a log scale one gets:


And what stands out most here is a kind of recursive structure—which is the result of long sequences that basically represent the analog of “subroutine calls” that go back and repeatedly prove lemmas that are needed.


OK, so what about the whole proof of a · b = b · a? Yes, it can be unrolled—in terms of 83,314 applications of the original axiom. The sequence of expression sizes is:


Or on a log scale:


The distribution of expression sizes now shows clear deviation from being exponential:


The maximum is 63245, which occurs just 81 steps after the exact midpoint of the proof. In other words, in the middle, the proof has wandered incredibly far out in metamathematical space (there are altogether CatalanNumber[63245] ≈ 10^38070 possible expressions of the size it reaches).
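The Catalan number here counts binary-tree shapes, and the quoted order of magnitude is easy to check with exact integer arithmetic (in Python, for illustration):

```python
# CatalanNumber[n] = Binomial[2n, n]/(n + 1): the number of distinct binary
# trees with n internal nodes.
from math import comb

n = 63245
catalan = comb(2 * n, n) // (n + 1)   # exact; Catalan numbers are integers
# The result has 38070 digits, i.e. CatalanNumber[63245] ≈ 10^38070.
assert len(str(catalan)) == 38070
```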


The proof returns to small expressions just a few times; here are all the cases in which the size is below 10:


So, yes, it is possible to completely unroll the proof into a sequence of applications of the original axiom. But if one does this, it inevitably involves repeating lots of work. Being able to use intermediate lemmas in effect lets one “share common subparts” in the proof, so that one ends up with just 104 “rule applications”, rather than 83,314. Not that it’s easy to understand those 104 steps…


Is There a Better Notation?


Looking at our proof—either in its original “lemma” form, or in its “unrolled” form—the most striking aspect of it is how complicated (and incomprehensible) it seems to be. But one might wonder whether much of that complexity is just the result of not “using the right notation”. In the end, we’ve got a huge number of expressions written in terms of · operations that we can interpret as Nand (or Nor). And maybe it’s a little like seeing the operation of a microprocessor down at the level of individual gates implementing Nands or Nors. And might there perhaps be an analog of a higher-level representation—with higher-level operations (even like arithmetic) that are more accessible to us humans?


It perhaps doesn’t help that Nand itself is a rather non-human construct. For example, not a single natural human language seems to have a word for Nand. But there are combinations of Nands that have more familiar interpretations:


But what combinations actually occur in our proof? Here are the most common subexpressions that appear in lemmas in the proof:


And, yes, we could give the most common of these special names. But it wouldn’t really help in “compressing” the proof—or making it easier to understand.


What about “upgrading” our “laws of inference”, i.e. the way that we can derive new lemmas from old? Perhaps instead of substitution and bisubstitution, which both take two lemmas and produce one more, we could set up more elaborate “tactics” that for example take in more input lemmas. We’ve seen that if we completely unroll the proof, it gets much longer. So perhaps there is a “higher-order” setup that for example dramatically shortens the proof.


One way one might identify this is by seeing commonly repeating structures in the subgraphs that lead to lemmas. But in fact these subgraphs are quite diverse:


A typical feature of human-written mathematical proofs is that they’re “anchored” by famous theorems or lemmas. They may have fiddly technical pieces. But usually there’s a backbone of “theorems people know”.


We have the impression that the proof we’re discussing here “spends most of its time wandering around the wilds of metamathematical space”. But perhaps it visits waypoints that are somehow recognizable, or at least should be. Or in other words, perhaps out in the metamathematical space of lemmas there are ones that are somehow sufficiently popular that they’re worth giving names to, and learning—and can then be used as “reference points” in terms of which our proof becomes simpler and more human accessible.


It’s a story very much like what happens with human language. There are things out there in the world, but when there’s a certain category of them that are somehow common or important enough, we make a word for them in our language, which we can then use to “compactly” refer to them. (It’s again the same story when it comes to computational language, and in particular the Wolfram Language, except that in that case it’s been my personal responsibility to come up with the appropriate definitions and names for functions to represent “common lumps of computation”.)


But, OK, so what are the “popular lemmas” of Nand proofs? One way to explore this is to enumerate statements that are “true about Nand”—then to look at proofs of these statements (say found with FindEquationalProof from our axiom) and see what lemmas show up frequently in them.


Enumerating statements “true about Nand”, starting from the smallest, we get


where we have highlighted statements from this list that appear as lemmas in our proof.


Proving each of these statements from our original axiom, here are the lengths of proofs we find (for all 1341 distinct theorems with up to LeafCount 4 on each side):


A histogram shows that it’s basically a bimodal distribution


with the smallest “long-proof” theorem being:


In aggregate, all these proofs use about 200,000 lemmas. But only about 1200 of these are distinct. And we can plot which lemmas are used in which proofs—and we see that there are indeed many lemmas that are used across wide ranges of proofs, while there are a few others that are “special” to each proof (the diagonal stripe is associated with lemmas close to the statement being proved):


If we rank all distinct lemmas from most frequently to least frequently used, we get the following distribution of lemma usage frequencies across all our proofs:


It turns out that there is a “common core” of 49 lemmas that are used in every single one of the proofs. So what are these lemmas? Here’s a plot of the usage frequency of lemmas against their size—with the “common ones” populating the top line:


And at first this might seem surprising. We might have expected that short lemmas would be the most frequent, but instead we’re seeing long lemmas that always appear, the very longest being:


So why is this? Basically it’s that these long lemmas are being used at the beginning of every proof. They’re the result of applying bisubstitution to the original axiom, and in some sense they seem to be laying down a kind of net in metamathematical space that then allows more diverse—and smaller—lemmas to be derived.


But how are these “common core” popular lemmas distributed within proofs? Here are a few examples:


And what we see is that while, yes, the common core lemmas are always at the beginning, they don’t seem to have a uniform way of “plugging into” the rest of the proof. And it doesn’t, for example, seem as if there’s just some small set of (perhaps simple) “waypoint” lemmas that one can introduce that will typically shorten these proofs.


If one effectively allows all the common core lemmas to be used as axioms, then inevitably proofs will be shortened; for example, the proof of a · b = b · a—which only ends up using 5 of the common core lemmas—is now shortened to 51 lemmas:


It doesn’t seem to become easier to understand, though. And if it’s unrolled, it’s still 5013 steps.


Still, one can ask what happens if one just introduces particular “recognizable” lemmas as additional axioms. For example, if we include “commutativity” a · b = b · a then we find that, yes, we do manage to reduce the lengths of some proofs, but certainly not all:


Are there any other “pivotal” lemmas we could add? In particular, what about lemmas that can help with the proofs of length 200 or more? It turns out that all of these proofs involve the lemma:


So what happens if we add this? Well, it definitely reduces proof lengths:


And sometimes it even seems like it brings proofs into “human range”. For example, a proof of


from our original axiom has length 56. Adding in commutativity reduces it to length 18. And adding our third lemma reduces it to just length 9—and makes it not even depend directly on the original axiom:


But despite the apparent simplicity here, the steps involved—particularly when bisubstitution is used—are remarkably hard to follow. (Note the use of a = a as a kind of “implicit axiom”—something that has actually also appeared, without comment, in many of our other proofs.)


Can We Get a Shorter Proof?


The proof that we’ve been studying can be seen in some ways as a rather arbitrary artifact. It’s the output of FindEquationalProof, with all its specific detailed internal algorithms and choices. In the Appendix, we’ll see that other automated theorem proving systems give very similar results. But we still might wonder whether actually the complexity of the proof as we’ve been studying it is just a consequence of the details of our automated theorem proving—and that in fact there’s a much shorter (and perhaps easier to understand) proof that exists.


One approach we could take—reminiscent of higher category theory—is to think about just simplifying the proof we have, effectively using proof-to-proof transformations. And, yes, this is technically difficult, though it doesn’t seem impossible. But what if there are “holes” in proof space? Then a “continuous deformation” of one proof into another will get stuck, and even if there is a much shorter proof, we’re liable to get “topologically stuck” before we find it.


One way to be sure we’re getting the shortest proof of a particular lemma is to explicitly find the first place that lemma appears in the (future) entailment cone of our original axiom. For example, as we saw above, a single substitution event leads to the entailment cone:


Every lemma produced here is, by construction, in principle derivable by a proof involving a single substitution event. But if we actually use FindEquationalProof to prove these lemmas, the proofs we get mostly involve 2 events (and in one case 4):


If we take another step in the entailment cone, we get a total of 5062 lemmas. From the way we generated them, we know that all these lemmas can in principle be reached by proofs of length 2. But if we run FindEquationalProof on them, we find a distribution of proof lengths:


And, yes, there is one lemma (with LeafCount 183) that is found only by a proof of length 15. But most often the proof length is 4—or about double what it could be.

\n

If we generate the entailment cone for lemmas using bisubstitution rather than just ordinary substitution, there are slightly more cases where FindEquationalProof does worse at getting minimal proofs.

\n

For example, the lemma

\n
\n
\n

\n

and 3 others can be generated by a single bisubstitution from the original axiom, but FindEquationalProof gives only proofs of length 4 for all of these.

\n

What about unrolled proofs, in which one can generate an entailment cone by starting from a particular expression, and then applying the original axiom in all possible ways? For example, let’s say we start with:

\n
\n
\n

\n

Then applying bisubstitution with the original axiom once in all possible ways gives:

\n
\n
\n

\n

Applying bisubstitution a second time gives a larger entailment cone:

\n
\n
\n

\n

But now it turns out that—as indicated—one of the expressions in this cone is:

\n
\n
\n

\n

So this shows that the lemma

\n
\n
\n

\n

can in principle be reached with just two steps of “unrolled” proof:

\n
\n
\n

\n

And in this particular case, if we use FindEquationalProof and then unroll the resulting proof we also get a proof of length 3—but it goes through a different intermediate expression:

\n
\n
\n

\n

As it happens, this intermediate expression is also reached in the entailment cone that we get by starting from our “output” expression and then applying two bisubstitutions:

\n
\n
\n

\n

What Actually Is the “·”? Models and the Proof

\n

We can think of logic (or Boolean algebra) as being associated with a certain collection of theorems. And what our axiom does is to provide something from which all theorems of logic (and nothing but theorems of logic) can be derived. At some level, we can think of it as just being about symbolic expressions. But in our effort to understand what’s going on—say with our proof—it’s sometimes useful to ask how we can “concretely” interpret these expressions.

\n

For example, we might ask what the · operator actually is. And what kinds of things can our symbolic variables be? In effect we’re asking for what in model theory are called “models” of our axiom system. And in aligning with logic the most obvious model to discuss is one in which variables can be True or False, and the · represents either the logical operator Nand or the logical operator Nor.

\n

The truth table, say for Nand, is:

\n
\n
\n

\n

And as expected, with this model for ·, we can confirm that our original axiom holds:

\n
\n
\n

\n

In general, though, our original axiom allows two size-2 models (that we can interpret as Nand and Nor):

\n
\n
\n

\n
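Concretely, the size-2 case can be brute-forced: enumerate all 16 binary operations on {0, 1} and keep those satisfying the axiom. (A sketch, not the Wolfram Language model-finding machinery; the axiom is written here in its published form ((a·b)·c)·(a·((a·c)·a)) = c.)

```python
from itertools import product

def models_of_axiom(size=2):
    """Enumerate all size^(size*size) multiplication tables and return
    those satisfying ((a.b).c) . (a.((a.c).a)) == c for all a, b, c."""
    vals = range(size)
    found = []
    for table in product(vals, repeat=size * size):
        f = lambda x, y, t=table: t[x * size + y]
        if all(f(f(f(a, b), c), f(a, f(f(a, c), a))) == c
               for a, b, c in product(vals, repeat=3)):
            found.append(table)
    return found

models = models_of_axiom(2)
# Exactly two size-2 tables survive: Nand (1,1,1,0) and Nor (1,0,0,0),
# listing f(0,0), f(0,1), f(1,0), f(1,1)
```

Running the same enumeration with size 3 returns nothing, consistent with the statement that no size-3 models exist.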

It allows no size-3 models, and in fact in general allows only models of size 2^n; for example, for size 4 its models are:

\n
\n
\n

\n

So what about a · b = b · a? What models does it allow? For size 2, it’s all 8 possible models with symmetric “multiplication tables”:

\n
\n
\n

\n

But the crucial point is that the 2 models for our original axiom system are part of these. In other words, at least for size-2 models, satisfying the original axiom system implies satisfying a · b = b · a.

\n
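The commutativity count is just as easy to confirm by enumeration (an illustrative sketch; at size 2, a · b = b · a simply means the multiplication table is symmetric):

```python
from itertools import product

# Size-2 tables indexed by (a, b) -> table[2*a + b]; symmetry means
# the (0,1) and (1,0) entries agree.
symmetric = [t for t in product((0, 1), repeat=4) if t[1] == t[2]]
# 8 of the 16 possible tables are symmetric, and the Nand and Nor
# tables (1,1,1,0) and (1,0,0,0) are among them, so any size-2 model
# of the original axiom automatically satisfies a.b == b.a
```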

And indeed any lemma derived from our axiom system must allow the models associated with our original axiom system. But it may also allow more—and sometimes many more. So here’s a map of our proof, showing how many models (out of 16 possible) each lemma allows:

\n
\n
\n

\n

Here are the results for size-3 models:

\n
\n
\n

\n

And, once again, these look complicated. We can think of models as defining—in some sense—what lemmas are “about”. So, for example, our original axiom is “about” Nand and Nor. The lemma a · b = b · a is “about” symmetric functions. And so on. And we might have hoped that we could gain some understanding of our proof by looking at how different lemmas that occur in it “sculpt” what is being talked about. But in fact we just seem to end up with complicated descriptions of sets that don’t seem to have any obvious relationship with each other.

\n

What about a Higher-Level Abstraction?

\n

If there’s one thing that stands out about our proof—and the analysis we’ve given of it here—it’s how fiddly and “in the weeds” it seems to be. But is that because we’re missing some big picture? Is there actually a more abstract way of discussing things, that gets to our result without having to go through all the details?

\n

In the history of mathematics many of the most important themes have been precisely about finding such higher-level abstractions. We could start from the explicit symbolic axioms

\n
\n
\n

\n

or even

\n
\n
\n

\n

and start building up theorems much as we’ve done here. Or we could recognize that these are axioms for group theory, and then start using the abstract ideas of group theory to derive our theorems.

\n

So is there some higher-level version of what we’re discussing here? Remember that the issue is not about the overall structure of Boolean algebra; rather it’s about the more metamathematical question of how one can prove that all of Boolean algebra can be generated from the axiom:

\n
\n
\n

\n

In the last few sections we’ve tried a few semi-empirical approaches to finding higher-level representations. But they haven’t gotten very far. And to get further we’re probably going to need a serious new idea.

\n

And, if history is a guide, we’re going to need to come up with an abstraction that somehow “goes outside of the system” before “coming back”. It’s like trying to figure out the real roots of a cubic equation, and realizing that the best way to do this is to introduce complex numbers, even though the imaginary parts will cancel at the end.

\n

In the direct exploration of our proof, it feels as if the intermediate lemmas we generate “wander off into the wilds of metamathematical space” before coming back to establish our final result. And if we were using a higher-level abstraction, we’d instead be “wandering off” into the space of that abstraction. But what we might hope is that—at least with the concepts we would use in discussing that abstraction—the path that would be involved would be “short enough to be accessible to human understanding”.

\n

Will we be able to find such an abstraction? It’s a subtle question. Because in effect it asks whether we can reduce the computational effort needed for the proof—or, in other words, whether we can find a pocket of computational reducibility in what in general will be a computationally irreducible process. But it’s not a question that can really be answered just for our specific proof on its own. After all, our “abstraction” could in principle just involve introducing a primitive that represents our whole proof or a large part of it. But to make it what we can think of as a real abstraction we need something that spans many different specific examples—and, in our case, likely many axiomatic systems or symbolic proofs.

\n

So is such an abstraction possible? In the history of mathematics the experience has been that after enough time (often measured in centuries) has passed, abstractions tend to be found. But at some level this has been self-fulfilling. Because the areas that are considered to have remained “interesting for mathematics” tend to be just those where general abstractions have in fact been found.

\n

In ruliology, though, the typical experience has been different. Because there it’s been routine to sample the computational universe of possible simple programs and encounter computational irreducibility. In the end it’s still inevitable that among the computational irreducibility there must be pockets of computational reducibility. But the issue is that these pockets of computational reducibility may not involve features of our system that we care about.

\n

So is a proof of the kind we’re discussing here more like ruliology, or more like “typical mathematics”? Insofar as it’s a mathematical-style proof of a mathematical statement it feels more like typical mathematics. But insofar as it’s something found by the computational process of automated theorem proving it perhaps seems more like ruliology.

\n

But what might a higher-level abstraction for it look like? Figuring that out is probably tantamount to finding the abstraction. But perhaps one can at least expect that in some ways it will be metamathematical, and more about the structure and character of proofs than about their content. Perhaps it will be something related to the framework of higher category theory, or some form of meta-algebra. But as of now, we really don’t know—and we can’t even say that such an abstraction with any degree of generality is possible.

\n

LLMs to the Rescue?

\n

The unexpected success of LLMs in language generation and related tasks has led to the idea that perhaps eventually systems like LLMs will be able to “do everything”—including for example math. We already know—not least thanks to Wolfram Language—that lots of math can be done computationally. But often the computations are hard—and, as in the example of the proof we’re discussing here, incomprehensible to humans. So the question really is: can LLMs “humanize” what has to be done in math, turning everything into a human-accessible narrative? And here our proof seems like an excellent—if challenging—test case.

\n

But what happens if we just ask a current LLM to generate the proof from scratch? It’s not a good picture. Very often the LLM will eagerly generate a proof, but it’ll be completely wrong, often with the same kind of mistakes that a student somewhat out of their depth might make. Here’s a typical response, where an LLM simply assumes that the · operator is associative (which it isn’t in Boolean algebra), then produces a proof that at first blush looks at least vaguely plausible, but is in fact completely wrong:

\n

Inadequate LLM proof

\n

Coming up with an explanation for what went wrong is basically an exercise in “LLM psychology”. But to a first approximation one might say the following. LLMs are trained to “fill in what’s typical”, where “typical” is defined by what appears in the training set. But (absent some recent Wolfram Language and Wolfram|Alpha based technology of ours) what’s been available as a training set has been human-generated mathematical texts, where, yes, operators are often associative, and typical proofs are fairly short. And in the “psychology of LLMs” an LLM is much more likely to “do what’s typical” than to “rigorously follow the rules”.

\n

If you press the LLM harder, then it might just “abdicate”, and suggest using the Wolfram Language as a tool to generate the proof. So what happens if we do that, then feed the finished proof to the LLM and ask it to explain? Well, typically it just does what LLMs do so well, and writes an essay:

\n

LLM proof essay

\n

So, yes, it does fine in “generally framing the problem”. But not on the details. And if you press it for details, it’ll typically eventually just start parroting what it was given as input.

\n

How else might we try to get the LLM to help? One thing I’ve certainly wondered is how the lemmas in the proof relate to known theorems—perhaps in quite different areas of mathematics. It’s something one might imagine one would be able to answer by searching the literature of mathematics. But, for example, textual search won’t be sufficient: it has to be some form of semantic search based on the meaning or symbolic structure of lemmas, not their (fairly arbitrary) textual presentation. A vector database might be all one needs, but one can certainly ask an LLM too:

\n

LLM semantic search results

\n
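One ingredient of such a semantic search is cheap to sketch: comparing lemmas up to variable renaming rather than by their arbitrary textual form. Assuming single-letter lowercase variable names, a canonical form can be computed by renaming variables in order of first appearance (an illustration of the idea, not a full structural-similarity measure):

```python
import re

def canonical(lemma):
    """Rename single-letter variables in order of first appearance, so
    that structurally identical lemmas compare equal regardless of the
    (fairly arbitrary) variable names they were written with."""
    mapping = {}
    def rename(m):
        v = m.group(0)
        mapping.setdefault(v, "v%d" % len(mapping))
        return mapping[v]
    return re.sub(r"[a-z]", rename, lemma)

# The same lemma written with different variable names now matches:
canonical("(p·q)·p = q·p") == canonical("(b·a)·b = a·b")  # True
```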

It’s not extremely helpful, though, charmingly, it correctly identifies the source of our original axiom. I’ve tried similar queries for our whole set of lemmas across a variety of LLMs, with a variety of RAG systems. Often the LLM will talk about an interpretation for some lemma—but the lemma isn’t actually present in our proof. But occasionally the LLM will mention possible connections (“band theory”; “left self-distributive operations in quandles”; “Moufang loops”)—though so far none have seemed to quite hit the mark.

\n

And perhaps this failure is itself actually a result—telling us that the lemmas that show up in our proof really are, in effect, out in the wilds of metamathematical space, probing places that haven’t ever been seriously visited before by human mathematics.

\n

But beyond LLMs, what about more general machine learning and neural net approaches? Could we imagine using a neural net as a probe to find “exploitable regularities” in our proof? It’s certainly possible, but I suspect that the systematic algorithmic methods we’ve already discussed for finding optimal notations, popular lemmas, etc. will tend to do better. I suppose it would be one thing if our systematic methods had failed to even find a proof. Then we might have wanted something like neural nets to try to guess the right paths to follow, etc. But as it is, our systematic methods rather efficiently do manage to successfully find a proof.

\n

Of course, there’s still the issue that we’re discussing here that the proof is very “non-human”. And perhaps we could imagine that neural nets, etc.—especially when trained on existing human knowledge—could be used to “form concepts” that would help us humans to understand the proof.

\n

We can get at least a rough analogy for how this might work by looking at visual images produced by a generative AI system trained from billions of human-selected images. There’s a concept (like “a cube”) that exists somewhere in the feature space of possible images. But “around” that concept are other things—“out in interconcept space”—that we don’t (at least yet) explicitly have words for:

\n
\n
Interconcept space
\n

\n

And it’ll presumably be similar for math, though harder to represent in something like a visual way. There’ll be existing math concepts. But these will be embedded in a vast domain of “mathematical interconcept space” that we humans haven’t yet “colonized”. And what we can imagine is that—perhaps with the help of neural nets, etc.—we can identify a limited number of “points in interconcept space” that we can introduce as new concepts that will, for example, provide useful “waypoints” in understanding our proof.

\n

But Why Is the Theorem True?

\n

It’s a common human urge to think that anything that’s true must be true for a reason. But what about our theorem? Why is it true? Well, we’ve seen a proof. But somehow that doesn’t seem satisfactory. We want “an explanation we can understand”. But we know that in general we can’t always expect to get one.

\n

It’s a fundamental implication of computational irreducibility that things can happen where the only way to “see how they happen” is just to “watch them happen”; there’s no way to “compress the explanation”.

\n

Consider the following patterns. They’re all generated by cellular automata. And all live exactly 100 steps before dying out. But why?

\n
\n
\n

\n
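The notion of a pattern “living” some number of steps is easy to make concrete (a minimal sketch using elementary cellular automata on a periodic grid; the particular rules in the original figures were found by search and are not reproduced here):

```python
def ca_lifetime(rule, width=64, max_steps=200):
    """Run an elementary cellular automaton from a single black cell and
    return the step at which the pattern dies out (becomes all white),
    or None if it is still alive after max_steps."""
    table = [(rule >> i) & 1 for i in range(8)]  # rule number -> lookup table
    cells = [0] * width
    cells[width // 2] = 1
    for step in range(1, max_steps + 1):
        cells = [table[(cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % width]]
                 for i in range(width)]
        if not any(cells):
            return step
    return None

# Rule 0 kills everything in one step; rule 254 grows and never dies
```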

In a few cases it seems like we can perhaps at least begin to imagine “narratively describing” a mechanism. But most of the time all we can say is basically that they “live 100 steps because they do”.

\n

It’s a quintessential consequence of computational irreducibility. It might not be what we’d expect, or hope for. But it’s reality in the computational universe. And it seems very likely that our theorem—and its proof—is like this too. The theorem in effect “just happens to be true”—and if you run the steps in the proof (or find the appropriate path in the entailment cone) you’ll find that it is. But there’s no “narrative explanation”. No “understanding of why it’s true”.

\n

Intuition and Automated Theorem Proving

\n

We’ve been talking a lot about the proof of our theorem. But where did the theorem to prove come from in the first place? Its immediate origin was an exhaustive search I did of simple axiom systems, filtering for ones that could conceivably generate Boolean algebra, followed by testing each of the candidates using automated theorem proving.

\n

But how did I even get the idea of searching for a simple axiom system for Boolean algebra? Based on the axiom systems for Boolean algebra known before—and the historical difficulty of finding them—one might have concluded that it was quite hopeless to find an axiom system for Boolean algebra by exhaustive search. But by 2000 I had nearly two decades of experience in exploring the computational universe—and I was well used to the remarkable phenomenon that even very simple computational rules can lead to behavior of great complexity. So the result was that when I came to think about axiom systems and the foundations of mathematics my intuition led me to imagine that perhaps the simplest axiom system for something like Boolean algebra might be simple enough to exhaustively search for.

\n

And indeed discovering the axiom system we’ve discussed here helped further expand and deepen my intuition about the consequences of simple rules. But what about the proof? What intuition might one get from the proof as we now know it, and as we’ve discussed here?

\n

There’s much intuition to be got from observing the world as it is. But for nearly half a century I’ve had another crucial source of intuition: observing the computational universe—and doing computational experiments. I was recently reflecting on how I came to start developing intuition in this way. And what it might mean for intuition I could now develop from things like automated theorem proving and AI.

\n

Back in the mid-1970s my efforts in particle physics led me to start using computers to do not just numerical, but also algebraic computations. In numerical computations it was usual to just get a few numbers out, that perhaps one could plot to make a curve. But in algebraic computations one instead got out formulas—and often very ornate ones full of structure and detail. And for me it was routine to get not just one formula, but many. And looking at these formulas I started to develop intuition about them. What functions would they involve? What algebraic form would they take? What kind of numbers would they involve?

\n

I don’t think I ever consciously realized that I was developing a new kind of computationally based intuition. But I soon began to take it for granted. And when—at the beginning of the 1980s—I started to explore the consequences of simple abstract systems like cellular automata it was natural to expect that I would get intuition from just “seeing” how they behaved. And here there was also another important element. Because part of the reason I concentrated on cellular automata was precisely because one could readily visualize their behavior on a computer.

\n

I don’t think I would have learned much if I’d just been printing out “numerical summaries” of what cellular automata do. But as it was, I was seeing their behavior in full detail. And—surprising though what I saw was—I was soon able to start getting an intuition for what could happen. It wasn’t a matter of knowing what the value of every cell would be. But I started doing things like identifying four general classes of cellular automata, and then recognizing the phenomenon of computational irreducibility.

\n

By the 1990s I was much more broadly exploring the computational universe—always trying to see what could happen there. And in almost all cases it was a story of defining simple rules, then running them, and making an explicit step-by-step visualization of what they do—and thereby in effect “seeing computation in action”.

\n

In recent years—spurred by our Physics Project—I’ve increasingly explored not just computational processes, but also multicomputational ones. And although it’s more difficult I’ve made every effort to visualize the behavior of multiway systems—and to get intuition about what they do.

\n

But what about automated theorem proving? In effect, automated theorem proving is about finding a particular path in a multiway system that leads to a theorem we want. We’re not getting to see “complete behavior”; we’re in effect just seeing one particular “solution” for how to prove a theorem.

\n
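This path-finding view can be sketched directly (again a toy string-rewriting stand-in for the real symbolic system; breadth-first search returns a shortest derivation whenever one exists within the depth bound):

```python
from collections import deque

def find_proof_path(start, target, rules, max_depth=8):
    """Breadth-first search through the multiway system generated by
    applying each rewrite rule at every position of every string."""
    queue = deque([(start, (start,))])
    seen = {start}
    while queue:
        s, path = queue.popleft()
        if s == target:
            return list(path)
        if len(path) > max_depth:
            continue
        for lhs, rhs in rules:
            i = s.find(lhs)
            while i != -1:
                t = s[:i] + rhs + s[i + len(lhs):]
                if t not in seen:
                    seen.add(t)
                    queue.append((t, path + (t,)))
                i = s.find(lhs, i + 1)
    return None

# A shortest path in the toy system A -> AB, B -> A:
find_proof_path("A", "AAB", [("A", "AB"), ("B", "A")])
# -> ['A', 'AB', 'ABB', 'AAB']
```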

And after one’s seen many examples, the challenge once again is to develop intuition. And that’s a large part of what I’ve been trying to do here. It’s crucial, I think, to have some way to visualize what’s happening—in effect because visual input is the most efficient way to get information into our brains. And while the visualizations we’ve developed here aren’t as direct and complete as, say, for cellular automaton evolution, I think they begin to give some overall sense of our proof—and other proofs like it.

\n

In studying simple programs like cellular automata, the intuition I developed led me to things like my classification of cellular automaton behavior, as well as to bigger ideas like the Principle of Computational Equivalence and computational irreducibility. So having now exposed myself to automated theorem proving as I exposed myself to algebraic computation and the running of simple rules in the past, what general principles might I begin to see? And might they, for example, somehow make the fact that our proof works ultimately seem “obvious”?

\n

In some ways yes, but in other ways no. Much as with simple programs, there are axiom systems so simple that, for example, the multiway systems they generate are highly regular. But beyond a low threshold, it’s common to get very complicated—and in many ways seemingly random—multiway system structures. Typically an infinite number of lemmas are generated, with little or no obvious regularity in their forms.

\n

And one can expect that—following the ideas of universal computation—it’ll typically be possible to encode in any one such multiway system the behavior of any other multiway system. In terms of axioms what one’s saying is that if one sets up the right translation between theorems, one will be able to use any one such axiom system to generate the theorems of any other. But the issue is that the translation will often make major changes to the structure of the theorems, and in effect define not just a “mathematical translation” (like between geometry and algebra) but a metamathematical one (as one would need to get from Peano arithmetic to set theory).

\n

And what this means is that it isn’t surprising that even a very simple axiom system can generate a complicated set of possible lemmas. But knowing this doesn’t immediately tell one whether those lemmas will align with some particular existing theory—like Boolean algebra. And in a sense that’s a much more detailed question.

\n

At some metamathematical level it might not be a natural question. But at a “mathematical level” it is. And it’s what we have to address in connection with the theorem—and proof—we’re discussing here. Many aspects of the overall form and properties of the proof will be quite generic, and won’t depend on the particulars of the axiom system we’re using. But some will. And quite what intuition we may be able to get about these isn’t clear. And perhaps it’ll necessarily be fragmented and specific—in effect responding to the presence of computational irreducibility.

\n

It’s perhaps worth commenting that LLMs—and machine learning in general—represent another potential source of intuition. That intuition may well be more about the general features of us as observers and thinkers. But such intuition is potentially critical in framing just what we can experience, not only in the natural world, but also in the mathematical and metamathematical worlds. And perhaps the apparent impotence of LLMs when faced with the proof we’ve been discussing already tells us something significant about the nature of “mathematical observers” like us.

\n

So What Does It Mean for the Future of Mathematics?

\n

Let’s say we never manage to “humanize” the proof we’ve been discussing here. Then in effect we’ll end up with a “black-box theorem”—that we can be sure is true—but we’ll never know quite how or why. So what would that mean for mathematics?

\n

Traditionally, mathematics has tended to operate in a “white box” kind of way, trying to build narrative and understanding along with “facts”. And in this respect it’s very different from natural science. Because in natural science much of our knowledge has traditionally been empirical—derived from observing the world or experimenting on it—and without any certainty that we can “understand its origins”.

\n

Automated theorem proving of the kind we’re discussing here—or, for that matter, pretty much any exploratory computational experimentation—aligns mathematics much more with natural science, deriving what’s true without an expectation of having a narrative explanation of why.

\n

Could one imagine practicing mathematics that way? One’s already to some extent following such a path as soon as one introduces axiom systems to base one’s mathematics on. Where do the axiom systems come from? In the time of Euclid perhaps they were thought of as an idealization of nature. But in more modern times they are realistically much more the result of human choice and human aesthetics.

\n

So let’s say we determine (given a particular axiom system) that some black-box theorem is true. Well, then we can just add it, just as we could another axiom. Maybe one day it’ll be possible to prove P≠NP or the Riemann Hypothesis from existing axioms of mathematics (if they don’t in fact turn out to be independent). And—black box or not—we can expect to add them to what we assume in subsequent mathematics we do, much as they’re routinely added right now, even though their status isn’t yet known.

\n

It’s one thing to add one or two “black-box theorems”. But what happens when black-box theorems—that we can think of as “experimentally determined”—start to dominate the landscape of mathematics?

\n

Well, then mathematics will take on much more of the character of ruliology—or of an experimental science. When it comes to the applications of mathematics, this probably won’t make much difference, except that in effect mathematics will be able to become much more powerful. But the “inner experience” of mathematics will be quite different—and much less “human”.

\n

If one indeed starts from axioms, it’s not at the outset obvious why everything in mathematics should not be mired in the kind of alien-seeming metamathematical complexity that we’ve encountered in the discussion of our proof here. But what I’ve argued elsewhere is that the fact that in our experience of doing mathematics it’s not is a reflection of how “mathematical observers like us” sample the raw metamathematical structure generated by axioms (or ultimately by the subaxiomatic structure of the ruliad).

\n

The physics analogy I’ve used is that we succeed in doing mathematics at a “fluid dynamics level”, far above the detailed “molecular dynamics level” of things like the proof we’ve discussed here. Yes, we can ask questions—like ones about the structure of our proof—that probe the axiomatic “molecular dynamics level”. But it’s an important fact that in doing what we normally think of as mathematics we almost never have to; there’s a coherent way to operate purely at the “fluid dynamics level”.

\n

Is it useful to “dip down” to the molecular dynamics? Definitely yes, because that’s where we can readily do computations—like those in our proof, or in general those going on in the internals of the Wolfram Language. But a key idea in the design of the Wolfram Language is to provide a computational language that can express concepts at a humanized “fluid dynamics” level—in effect bridging between the way humans can think and understand things, and the way raw computation can be done with them.

\n

And it’s notable that while we’ve had great success over the years in defining “human-accessible” high-level representations for what amount to the “inputs” and “outputs” of computations, that’s been much less true of the “ongoing processes” of computation—or, for example, of the innards of proofs.

\n

Is there a good “human-level” way to represent proofs? If the proofs are short, it’s not too difficult (and the step-by-step solutions technology of Wolfram|Alpha provides a good large-scale example of what can be done). But—as we’ve discussed—computational irreducibility implies that some proofs will inevitably be long.

\n

If they’re not too long, then at least some parts of them might be constructed by human effort, say in a system like a proof assistant. But as soon as there’s much automation (whether with automated theorem proving or with LLMs) it’s basically inevitable that one will end up with things that at least approach what we’ve seen with the proof we’re discussing here.

\n

What can then be done? Well, that’s the challenge. Maybe there is some way to simplify, abstract or otherwise “humanize” the proof we’ve been discussing. But I rather doubt it. I think this is likely one of those cases where we inevitably find ourselves face to face with computational irreducibility.

\n

And, yes, there’s important science (particularly ruliology) to do on the structures we see. But it’s not mathematics as it’s traditionally been practiced. But that’s not to say that the results that come out of things like our proof won’t be useful for mathematics. They will be. But they make mathematics more like an experimental science—where what matters most is in effect the input and output rather than a “publishable” or human-readable derivation in between. And where the key issue in making progress is less in the innards of derivations than in defining clear computational ways to express input and output. Or, in effect, in capturing “human-level mathematics” in the primitives and structure of computational language.

\n

Appendix: What about a Different Theorem Proving System?

\n

The proof we’ve been discussing here was created using FindEquationalProof in the Wolfram Language. But what if we were to use a different automated theorem proving system? How different would the results be? In the spectrum of things that automated theorem proving systems do, our proof here is on the difficult end. And many existing automated theorem proving systems don’t manage to do it at all. But some of the stronger ones do. And in the end—despite their different internal algorithms and heuristics—it’s remarkable how similar the results they give are to those from the Wolfram Language FindEquationalProof (differences in the way lemmas vs. inference steps, etc. are identified make detailed quantitative comparisons difficult):

\n
\n
\n

\n

Thanks

\n

Thanks to Nik Murzin of the Wolfram Institute for his extensive help as part of the Wolfram Institute Empirical Metamathematics Project. Also Roger Germundsson, Sergio Sandoval, Adam Strzebonski, Michael Trott, Liubov Tupikina, James Wiles and Carlos Zapata for input. Thanks to Arnim Buch and Thomas Hillenbrand for their work in the 1990s on Waldmeister which is now part of FindEquationalProof (also to Jonathan Gorard for his 2017 work on the interface for FindEquationalProof). I was first seriously introduced to automated theorem proving in the late 1980s by Dana Scott, and have interacted with many people about it over the years, including Richard Assar, Bruno Buchberger, David Hillman, Norm Megill, Todd Rowland and Matthew Szudzik. (I’ve also interacted with many people about proof assistant, proof presentation and proof verification systems, both recently and in the past.)

\n", + "category": "Artificial Intelligence", + "link": "https://writings.stephenwolfram.com/2025/01/who-can-understand-the-proof-a-window-on-formalized-mathematics/", + "creator": "Stephen Wolfram", + "pubDate": "Thu, 09 Jan 2025 22:42:31 +0000", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "en", + "folder": "", + "feed": "wolfram", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "9bfd307707450e3c090fd2e94aea1158", + "highlights": [] + }, + { + "title": "Useful to the Point of Being Revolutionary: Introducing Wolfram Notebook Assistant", + "description": "\"\"Note: As of today, copies of Wolfram Version 14.1 are being auto-updated to allow subscription access to the capabilities described here. [For additional installation information see here.] Just Say What You Want! Turning Words into Computation Nearly a year and a half ago—just a few months after ChatGPT burst on the scene—we introduced the first […]", + "content": "\"\"\n

\"Useful

\n

Note: As of today, copies of Wolfram Version 14.1 are being auto-updated to allow subscription access to the capabilities described here. [For additional installation information see here.]

\n

Just Say What You Want! Turning Words into Computation

\n

Nearly a year and a half ago—just a few months after ChatGPT burst on the scene—we introduced the first version of our Chat Notebook technology to integrate LLM-based chat into Wolfram Notebooks. For the past year and a half we’ve been building on those foundations. And today I’m excited to be able to announce that we’re releasing the fruits of those efforts: the first version of our Wolfram Notebook Assistant.

\n

There are all sorts of gimmicky AI assistants out there. But Notebook Assistant isn’t one of them. It’s a serious, deep piece of new technology, and what’s more important, it’s really, really useful! In fact, I think it’s so useful as to be revolutionary. Personally, I thought I was a pretty efficient user of Wolfram Language—but Notebook Assistant has immediately made me not only significantly more efficient, but also more ambitious in what I try to do. I hadn’t imagined just how useful Notebook Assistant was going to be. But seeing it now I can say for sure that it’s going to raise the bar for what everyone can do. And perhaps most important of all, it’s going to open up computational language and computational thinking to a vast range of new people, who in the past assumed that those things just weren’t accessible to them.

\n

Leveraging the decades of work we’ve done on the design and implementation of the Wolfram Language (and Wolfram|Alpha), Notebook Assistant lets people just say in their own words what they want to do; then it does its best to crispen it up and give a computational implementation. Sometimes it goes all the way and just delivers the answer. But even when there’s no immediate “answer” it does remarkably well at building up structures where things can be represented computationally and tackled concretely. People really don’t need to know anything about computational language—or computational thinking—to get started; Notebook Assistant will take their ideas, rough as they may be, and frame them in computational language terms.

\n

I’ve long seen Wolfram Language as uniquely providing the infrastructure and “notation” to enable “computational X” for all fields X. I’m excited to say that I think Notebook Assistant now bridges “the last mile” to let anyone—at almost any level—access the power of computational language, and “do computational X”. In its original conception, Wolfram Notebook Assistant was just intended to be “useful”. But it’s emerging as something much more than that; something positively revolutionary.

\n

“I can’t believe it’ll do anything useful with that”, I’ll think. But then I’ll try it. And, very often, something amazing will happen. Something that gets me past some sticking point or over some confusion. Something that gives me an unexpected new building block—or new idea—for what I’m trying to do. And that uses the medium of our computational language to take me beyond where I would ever have reached before.

\n

So how does one use Notebook Assistant? Once you’ve signed up you can just go to the toolbar of any notebook, and open a Notebook Assistant chat window:

\n

Notebook Assistant chat window

\n

Now tell Notebook Assistant what you want to do. The more precise and explicit you are, the better. But you don’t have to have thought things through. Just type what comes into your mind. Imagine you’ve been working in a notebook, and (somehow) you’ve got a picture of some cats. You wonder “How can I find the cats in this picture?” Well, just ask Notebook Assistant!

\n

How can I find the cats in this picture?

\n

Notebook Assistant gives some narrative text, and then a piece of Wolfram Language code—which you can just run in your notebook:

\n
\n
\n

\n

It seems a bit like magic. You say something vague, and Notebook Assistant turns it into something precise and computational—which you can then run. It’s not always as straightforward as in this example. But the important thing is that in practice (at least in my rather demanding experience) Notebook Assistant essentially always does spectacularly well at being useful—and at telling me things that move forward what I’m trying to do.

\n

Big or Small, Just Try It!

\n

Imagine that sitting next to you, you had someone very knowledgeable about Wolfram Language and about computational thinking in general. Think what you might ask them. That’s what you should ask Notebook Assistant. And if there’s one thing to communicate here, it’s “Just try it!” You might think what you’re thinking about is too vague, or too specific, or too technical. But just try asking Notebook Assistant. In my experience, you’ll be amazed at what it’s able to do, and how helpful it’s able to be.

\n

Maybe you’re an experienced Wolfram Language user who “knows there must be a way to do something”, but can’t quite remember how. Just ask Notebook Assistant. And not only will it typically be able to find the function (or whatever) you need; it’ll also usually be able to create a code fragment that does the very specific thing you asked about. And, by the way, it’ll save you lots of typing (and debugging) by filling in those fiddly options and so on just how you need them. And even if it doesn’t quite nail it, it’ll have given a skeleton of what you need, that you can then readily edit. (And, yes, the fact that it’s realistic to edit it relies on the fact that Wolfram Language represents it in a way that humans can readily read as well as write.)

\n

What if you’re a novice, who’s never used Wolfram Language before, and never really been exposed to computational thinking, or for that matter, “techie stuff” at all? Well, the remarkable thing is that Notebook Assistant will still be able to help you—a lot. You can ask it something very vague, that doesn’t even seem particularly computational. It does remarkably well at “computationalizing things”. Taking what you’ve said, and finding a way to address it computationally—and to lead you into the kind of computational thinking that’ll be needed for the particular thing you’re trying to do.

\n

In what follows, we’ll see a whole range of different ways to use Notebook Assistant. In fact, even as I’ve been writing this, I’ve discovered quite a few new ways to use it that I’d never thought of before.

\n

There are some general themes, though. The most important is the way Notebook Assistant pivotally relies on the Wolfram Language. In a sense, the main mission of Notebook Assistant is to make things computational. And the whole reason it can so successfully do that is that it has the Wolfram Language as its target. It’s leveraging the unique nature of the Wolfram Language as a full-scale computational language, able to coherently represent abstract and real-world things in a computational way.

\n

One might think that the Wolfram Language would in the end be mainly an “implementation layer”—serving to make what Notebook Assistant produces runnable. But in reality it’s very, very much more than that. In particular, it’s basically the medium—the language—in which computational ideas are communicated. When Notebook Assistant generates Wolfram Language, it’s not just something for the computer to run; it’s also something for humans to read. Yes, Notebook Assistant can produce text, and that’s useful, especially for contextualizing things. But the most concentrated and poignant communication comes in the Wolfram Language it produces. Want the TL;DR? Just look at the Wolfram Language code!

\n

Part of how Wolfram Language code manages to communicate so much so efficiently is that it’s precise. You can just mention the name of a function, and you know precisely what it does. You don’t have to “scaffold” it with text to make its meaning clear.

\n

But there’s something else as well. With its symbolic character—and with all the coverage and consistency that we’ve spent so much effort on over the decades—the Wolfram Language is uniquely able to “communicate in fragments”. Any fragment of Wolfram Language code can be run, and more important, it can smoothly fit into a larger structure. And that means that even small fragments of code that Notebook Assistant generates can be used as building blocks.

\n

It produces Wolfram Language code. You read the code (and it’s critical that it’s set up to be read). You figure out if it’s what you want. (And if it’s not, you edit it, or ask Notebook Assistant to do that.) Then you can use that code as a robust building block in whatever structure—large or small—you might be building.

\n

In practice, a critical feature is that you don’t have to foresee how Notebook Assistant is going to respond to what you asked. It might nail the whole thing. Or it might just take steps in the right direction. But then you just look at what it produced, and decide what to do next. Maybe in the end you’ll have to “break the problem down” to get Notebook Assistant to deal with it. But there’s no need to do that in advance—and Notebook Assistant will often surprise you by how far it’s able to get on its own.

\n

You might imagine that Notebook Assistant would usually need you to break down what you’re asking into “pure computational questions”. But in effect it has good enough “general knowledge” that it doesn’t. And in fact it will usually do better the more context you give it about why you’re asking it to do something. (Is it for chemical engineering, or for sports analytics, or what?)

\n

But how ambitious can what you ask Notebook Assistant be? What if you ask it something “too big”? Yes, it won’t be able to solve that 100-year-old problem or build a giant software system in its immediate output. But it does remarkably well at identifying pieces that it can say something about, and that can help you understand how to get started. So, as with many things about Notebook Assistant, you shouldn’t assume that it won’t be helpful; just try it and see what happens! And, yes, the more you use Notebook Assistant, the more you’ll learn just what kind of thing it does best, and how to get the most out of it.

\n

So how should you ultimately think about Notebook Assistant? Mainly you should think of it like a very knowledgeable and hardworking expert. But at a more mundane level it can serve as a super-enhanced documentation lookup system or code completion system. It can also take something vague you might ask it, and somehow undauntedly find the “closest formalizable construct”—that it can then compute with.

\n

An important feature is that it is—in human terms—almost infinitely patient and hardworking. Where a human might think: “it’s too much trouble to write out all those details”, Notebook Assistant just goes ahead and does it. And, yes, it saves you huge amounts of typing. But, more important, it makes it “cheap” to do things more perfectly and more completely. So that means you actually end up labeling those plot axes, or adding a comment to your code, or coming up with meaningful names for your variables.

\n

One of the overarching points about Notebook Assistant is that it lowers the barrier to getting help. You don’t have to think carefully about formulating your question. You don’t have to go clicking through lots of links. And you don’t have to worry that it’s too trivial to waste a coworker’s time on the question. You can just ask Notebook Assistant. Oh, and it’ll give you a response immediately. (And you can go back and forth with it, and ask it to clarify and refine things.)

\n

“How Can I Do That?”

\n

At least for me it’s very common: you have something in your mind that you want to do, but you don’t quite know how to achieve it in the Wolfram Language. Well, now you can just ask Notebook Assistant!

\n

I’ll show various examples here. It’s worth emphasizing that these examples typically won’t look exactly the same if you run them again. Notebook Assistant has a certain amount of “AI-style random creativity”—and it also routinely makes use of what you’ve done earlier in a session, etc. It also has to be said that Notebook Assistant will sometimes make mistakes—or will misunderstand what you’re asking it. But if you don’t like what it did, you can always press the button to generate a new response.

\n

Let’s start off with a basic computational operation:

\n
\n
\n

\"Click

\n

For an experienced user of the Wolfram Language like me, a simple “do it with FoldList” would already have been enough. But Notebook Assistant goes all the way—generating specific code for exactly what I asked. Courtesy of Wolfram Language, the code is very short and easy to read. But Notebook Assistant does something else for one as well: it produces an example of the code in action—which lets one check that it really does what one wanted. Oh, and then it goes even further, and tells me about a function in the Wolfram Function Repository (that I, for one, had never heard of; wait, did I write it?) that directly does the operation I want.
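The actual input here is shown only as an image, but the FoldList idea mentioned above has a close analogue in Python's standard library, a rough sketch of which is:

```python
# A rough Python analogue of Wolfram Language's FoldList:
# itertools.accumulate keeps every intermediate result of
# repeatedly applying a binary function across a list.
from itertools import accumulate

values = [1, 2, 3, 4, 5]

# Running sums, like FoldList[Plus, values]
print(list(accumulate(values)))                       # [1, 3, 6, 10, 15]

# Running products, like FoldList[Times, values]
print(list(accumulate(values, lambda a, b: a * b)))   # [1, 2, 6, 24, 120]
```

The key point in both languages is the same: the fold keeps all the intermediate states, not just the final one.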

\n

OK, so that was a basic computational operation. Now let’s try something a little more elaborate:

\n
\n
\n

\n

\"Click

\n

This involves several steps, but Notebook Assistant nails it, giving a nice example. (And, yes, it’s reading the Wolfram Language documentation, so often its examples are based on that.)

\n

But even after giving an A+ result right at the top, Notebook Assistant goes on, talking about various options and extensions. And despite being (I think) quite an expert on what the Wolfram Language can do, I was frankly surprised by what it came up with; I didn’t know about these capabilities!

\n
\n
\n

\n
\n
\n

\n

There’s an incredible amount of functionality built into the Wolfram Language (yes, four decades worth of it). And quite often things you want to do can be done with just a single Wolfram Language function. But which one? One of the great things about Notebook Assistant is that it’s very good at taking “raw thoughts”, sloppily worded, and figuring out what function you need. Like here, bam, “use LineGraph!”

\n
\n
\n

\n

\"Click

\n
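As a sketch of what the line-graph construction does (written here in Python, and far less general than the built-in LineGraph): each edge of the original graph becomes a vertex, and two such vertices are joined exactly when the underlying edges share an endpoint.

```python
# Minimal sketch of the line-graph construction: edges become vertices,
# joined when the original edges share a vertex.
from itertools import combinations

def line_graph(edges):
    """Return the edge list of the line graph of an undirected graph."""
    new_edges = []
    for e1, e2 in combinations(edges, 2):
        if set(e1) & set(e2):          # the two edges share an endpoint
            new_edges.append((e1, e2))
    return new_edges

# A path 1-2-3 has two edges sharing vertex 2, so its line graph
# has a single edge between them.
print(line_graph([(1, 2), (2, 3)]))    # [((1, 2), (2, 3))]
```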

You can ask Notebook Assistant “fairly basic” questions, and it’ll respond with nice, synthesized-on-the-spot “custom documentation”:

\n
\n
\n

\n

\"Click

\n

You can also ask it about obscure and technical things; it knows about every Wolfram Language function, with all its details and options:

\n
\n
\n

\n

\"Click

\n
\n
\n

\n

\"Click

\n
\n
\n

\n

\"Click

\n

Notebook Assistant is surprisingly good at writing quite minimal code that does sophisticated things:

\n
\n
\n

\n

\"Click

\n

If you ask it open-ended questions, it’ll often answer with what amount to custom-synthesized computational essays:

\n
\n
\n

\n

\"Click

\n

Notebook Assistant is pretty good at “pedagogically explaining what you can do”:

\n
\n
\n

\n

\"Click

\n

In everything we’ve seen so far, the workflow is that you ask Notebook Assistant something, then it generates a result, and then you use it. But everything can be much more interactive, and you can go back and forth with Notebook Assistant—say refining what you want it to do.

\n

Here I had something in mind, but I was quite sloppy in describing it. And although Notebook Assistant came up with a reasonable interpretation of what I asked, it wasn’t really what I had in mind:

\n
\n
\n

\n

\"Click

\n

So I went back and edited what I asked (right there in the Notebook Assistant window), and tried again:

\n
\n
\n

\n

\"Click

\n

The result was better, but still not right. But all I had to do was to tell it to make a change, and lo and behold, I got what I was thinking of:

\n
\n
\n

\n

\"Click

\n

By the way, you can also perfectly well ask about deployment to the web:

\n
\n
\n

\n

\"Click

\n

And while I might have some minor quibbles (why use a string for the molecule name, not \"Chemical\"; why not use CloudPublish; etc.) what Notebook Assistant produces works, and provides an excellent scaffold for further development. And, as it often does, Notebook Assistant adds a kind of “by the way, did you know?” at the end, showing how one could use ARPublish to produce output for augmented reality.

\n

Here’s one last example: creating a user interface element. I want to make a slider-like control that goes around (like an analog clock):

\n
\n
\n

\n

\"Click

\n

Well, actually, I had in mind something more minimal:

\n
\n
\n

\n

\"Click

\n

Impressive. Even if maybe it got that from some documentation or other example. But what if I wanted to tweak it? Well, actually, Notebook Assistant does seem to understand what it has:

\n
\n
\n

\n

\"Click

\n

“Can You Just Do That for Me?”

\n

What we’ve seen so far are a few examples of asking Notebook Assistant to tell us how to do things. But you can also just ask Notebook Assistant to do things for you, in effect producing “finished goods”:

\n
\n
\n

\n

\"Click

\n

Pretty impressive! And it even just went ahead and made the picture. By the way, if I wanted the code packaged up into a single line, I can just ask for that:

\n
\n
\n

\n

\"Click

\n

Notebook Assistant can generate interactive content too. And—very usefully—you don’t have to give precise specifications up front: Notebook Assistant will automatically pick “sensible defaults” (that, yes, you can trivially edit later, or just tell Notebook Assistant to change them for you):

\n
\n
\n

\n

\"Click

\n

Here’s an example that requires putting together several different ideas and functions. But Notebook Assistant manages it just fine—and in fact the code it produces is interesting and clarifying to read:

\n
\n
\n

\n

\"Click

\n

Notebook Assistant knows about every area of Wolfram Language functionality—here synthetic geometry:

\n
\n
\n

\n

\"Click

\n

And here chemistry:

\n
\n
\n

\n

\"Click

\n

It also knows about things like the Wolfram Function Repository, here running a function from there that generates a video:

\n
\n
\n

\n

\"Click

\n

Here’s something that again leverages Notebook Assistant’s encyclopedic knowledge of Wolfram Language capabilities, now pulling in real-time data:

\n
\n
\n

\n

\"Click

\n

I can’t resist trying a few more examples:

\n
\n
\n

\n

\"Click

\n

Let’s try something involving more sophisticated math:

\n
\n
\n

\n

\"Click

\n

(I would have used RegularPolygon[5], and I don’t think DiscretizeRegion is necessary … but what Notebook Assistant did is still very impressive.)
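The original computation is shown only as an image, but as a hedged illustration of why DiscretizeRegion isn't needed for a regular polygon: its area can be computed exactly, here sketched in Python via the shoelace formula and checked against the closed form (n/2) r² sin(2π/n).

```python
# Area of a regular n-gon with circumradius r, computed two ways.
import math

def regular_polygon_area(n, r=1.0):
    """Shoelace-formula area of a regular n-gon with circumradius r."""
    pts = [(r * math.cos(2 * math.pi * k / n),
            r * math.sin(2 * math.pi * k / n)) for k in range(n)]
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

# Closed form for comparison: (n/2) * r^2 * sin(2*pi/n)
print(regular_polygon_area(5))            # ~2.37764
print(2.5 * math.sin(2 * math.pi / 5))    # same value
```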

\n

Or here’s some more abstract math:

\n
\n
\n

\n

\"Click

\n

OK, so Notebook Assistant provides a very powerful way to go from words to computational results. So what then is the role of computational language and of “raw Wolfram Language”? First of all, it’s the Wolfram Language that makes everything we’ve seen here work; it’s what the words are being turned into so that they can be computed from. But there’s something much more than that. The Wolfram Language isn’t just for computers to compute with. It’s also for humans to think with. And it’s an incredibly powerful medium for that thinking. Like a great generalization of mathematical notation from the distant past, it provides a streamlined way to broadly formalize things in computational terms—and to systematically build things up.

\n

Notebook Assistant is great for getting started with things, and for producing a first level of results. But words aren’t ultimately an efficient way to say how to build up from there. You need the crisp, formal structure of computational language, in which even the tiny amounts of code you write can be incredibly powerful.

\n

Now that I’ve been using Notebook Assistant for a while I think I can say that on quite a few occasions it’s helped me launch things, it’s helped me figure out details, and it’s helped me debug things that have gone wrong. But the backbone of my computational progress has been me writing Wolfram Language myself (though quite often starting from something Notebook Assistant wrote). Notebook Assistant is an important new part of the “on ramp” to Wolfram Language; but it’s raw Wolfram Language that lets one really zoom forward to build new structures and achieve what’s computationally possible.

\n

“Where Do I Start?”

\n

Computational thinking is an incredibly powerful approach. But sometimes it’s hard to get started with, particularly if you’re not used to it. And although one might not imagine it, Notebook Assistant can be very useful here, essentially helping one brainstorm about what direction to take.

\n

I was explaining this to our head of Sales, and tried:

\n
\n
\n

\n

\"Click

\n

I really didn’t expect this to do anything terribly useful … and I was frankly amazed at what happened. Pushing my luck I tried:

\n
\n
\n

\n

\"Click

\n

Obviously this isn’t the end of the story, but it’s a remarkably good beginning—going from a vague request to something that’s set up to be thought about computationally.

\n

Here’s another example. I’m trying to invent a good system for finding books in my library. I just took a picture of a shelf of books behind my desk:

\n
\n
\n

\n

\"Click

\n

Once again, a very impressive result. Not the final answer, but a surprisingly good start that points me in the direction of image processing and segmentation. At first it runs too slowly, so it downsamples the image. Then it tells me I might need to tweak the parameters. So I just ask it to create a tool to do that:

\n
\n
\n

\n

\"Click

\n

And then:

\n
\n
\n

\n

\"Click

\n

It’s very impressive how much Notebook Assistant can help one go “from zero to computation”. And when one gets used to using it, it starts to be quite natural to just try it on all sorts of things one’s thinking about. But if one starts from nothing more than “quick, tell me something to compute”, it’s usually harder to come up with anything.

\n

And that reminds me of the very first time I ever saw a computer in real life. It was 1969 and I was 9 years old (and the computer was an IBM mainframe). The person who was showing me the computer asked me: “So what do you want to compute?” I really had no idea at that time “what one might compute”. Rather lamely I said “the weight of a dinosaur”. So, 55 years later, let’s try that again:

\n
\n
\n

\n

\"Click

\n

And let’s try going further:

\n
\n
\n

\n

\"Click

\n

“Tweak the Details for Me”

\n

Something I find very useful with Notebook Assistant is having it “tweak the details” of something I’ve already generated. For example, let’s say I have a basic plot of a sine curve in a notebook:

\n

\"Click

\n

Assuming I have that notebook in focus, Notebook Assistant will “see” what’s there. So then I can tell it to modify my sine curve—and what it will do is produce new code with extra details added:

\n

\"Click

\n

That’s a good result. But as a Wolfram Language aficionado I notice that the code is a bit more complicated than it needs to be. So what can I do about it? Well, I can just ask Notebook Assistant to simplify it:

\n

\"Click

\n

I can keep going, asking it to further “embellish” the plot:

\n

\"Click

\n

Let’s push our luck and try going even further:

\n

\"Click

\n

Oops. Something went wrong. No callouts, and a pink “error” box. I tried regenerating a few times. Often that helps. But this time it didn’t seem to. So I decided to give Notebook Assistant a suggestion:

\n

\"Click

\n

And now it basically got it. And with a little more back and forth I can expect to get exactly what I want.

\n

In the Wolfram Language, functions (like Plot) are set up to have good automatic defaults. But when you want, for example, to achieve some particular, detailed look, you often have to end up specifying all sorts of additional settings. And Notebook Assistant is very good at doing this, and in effect, patiently typing out all those option settings, etc.
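The underlying pattern here, defaults that apply automatically plus explicit settings that override them, can be sketched in Python (the option names below are invented for illustration; they are not actual Plot options):

```python
# Sketch of the "good automatic defaults, override only what you need"
# pattern that option-taking functions follow. Option names are invented.
DEFAULTS = {"frame": False, "grid": False, "label": None, "color": "auto"}

def resolve_options(**overrides):
    """Merge user-supplied settings over the defaults."""
    opts = dict(DEFAULTS)
    opts.update(overrides)
    return opts

print(resolve_options())                           # all defaults
print(resolve_options(grid=True, label="sin(x)"))  # only two settings changed
```

The point is that a caller specifies only what deviates from the defaults, which is exactly why detailed styling requires typing out many option settings.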

\n

“What Went Wrong? Fix It!”

\n

Let’s say you wrote some Wolfram Language (or perhaps Notebook Assistant did it for you). And let’s say it doesn’t work. Maybe it just produces the wrong output. Or maybe it generates all sorts of messages when it runs. Either way, you can just ask the Assistant “What went wrong?”

\n

\"Click

\n

Here the Assistant rather patiently and clearly explained the message that was generated, then suggested “correct code”:

\n

\"Click

\n

The Assistant tends to be remarkably helpful in situations like this—even for an experienced Wolfram Language user like me. In a sense, though, it has an “unfair advantage”. Not only has it learned “what’s reasonable” from seeing large amounts of Wolfram Language code; it also has access to “internal information”—like a stream of telemetry about messages that were generated (as well as stack traces, etc.).

\n

In general, Notebook Assistant is rather impressive at “spotting errors” even in long and sophisticated pieces of Wolfram Language code—and in suggesting possible fixes. And I can say that this is a way in which using Notebook Assistant has immediately saved me significant time in doing things with Wolfram Language.

\n

“Improve My Code”

\n

Notebook Assistant doesn’t just know how to write Wolfram Language code; it knows how to write good Wolfram Language code. And in fact if you give it even a sloppy “outline” of Wolfram Language code, the Assistant is usually quite good at making it clean and complete. And that’s important not only in being able to produce code that will run correctly; it’s also important in making code that’s clear enough that you can understand it (courtesy of the readability of good Wolfram Language code).

\n

Here’s an example starting with a rather horrible piece of Wolfram Language code on the right:

\n

\"Click

\n

The code on the right is quite buggy (it doesn’t initialize list, for example). But Notebook Assistant guesses what it’s supposed to do, and then makes nice “Wolfram style” versions, explaining what it’s doing.

\n

If the code you’re dealing with is long and complicated, Notebook Assistant may (like a person) get confused. But you can always select a particular part, then ask Notebook Assistant specifically about that. And the symbolic nature—and coherence—of the Wolfram Language will typically mean that Notebook Assistant will be able to act “modularly” on the piece that you’ve selected.

\n

Something I’ve found rather useful is to have Notebook Assistant refactor code for me. Here I’m starting from a sequence of separate inputs (yes, itself generated by Notebook Assistant) and I’m turning it into a single function:

\n

\"Click

\n

Now we can use the function however we want:

\n

\"Click

\n
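The same refactoring move, a sequence of one-off inputs collapsed into a single reusable function, looks like this in Python terms (the computation is a made-up stand-in, since the original code is shown only as an image):

```python
# Before: three separate notebook-style "inputs" that only work in one
# fixed order, with the data hard-wired in.
#   data = [4, 1, 3, 2]
#   cleaned = sorted(data)
#   total = sum(cleaned)

# After: one function with a clear argument and return value.
def summarize(data):
    """Sort the data and return (sorted_data, total)."""
    cleaned = sorted(data)
    return cleaned, sum(cleaned)

print(summarize([4, 1, 3, 2]))    # ([1, 2, 3, 4], 10)
```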

Going the other way is useful too. And Notebook Assistant is surprisingly good at grokking what a piece of code is “about”, and coming up with reasonable names for variables, functions, etc.:

\n

\"Click

\n

Yet another thing Notebook Assistant is good at is knowing all sorts of tricks to make code run faster:

\n

\"Click

\n
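One concrete example of such a speed trick, sketched here in Python rather than being anything Notebook Assistant specifically did: memoizing a recursive function so each subproblem is computed only once.

```python
# Memoization turns an exponential-time recursion into a linear-time one:
# each fib(n) is computed once and then served from the cache.
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(80))    # 23416728348467685, instant; uncached recursion would take ages
```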

“Explain That to Me”

\n

“What does that piece of code actually do?” Good Wolfram Language code—like good prose or good mathematical formalism—can succinctly communicate ideas, in its case in computational terms, precisely grounded in the definition of the language. But (as with prose and math) you sometimes need a more detailed exploration. And providing narrative explanations of code is something else that Notebook Assistant is good at. Here it’s taking a single line of (rather elegant) Wolfram Language code and writing a whole essay about what the code is doing:

\n

\"Click

\n

What if you have a long piece of code, and you just want to explain some small part of it? Well, since Notebook Assistant sees selections you make, you can just select one part of your code, and Notebook Assistant will know that’s what you want to explain.

\n

“Fill in the Paperwork for Me”

\n

The Wolfram Language is carefully designed to have built-in functions that just “do what you need”, without having to use idioms or set up repeated boilerplate. But there are situations where there’s inevitably a certain amount of “bureaucracy” to do. For example, let’s say you’re writing a function to deploy to the Function Repository. You enter the definition for the function into a Function Resource Definition Notebook. But now you have to fill in documentation, examples, etc. And in fact that’s often the part that takes the longest. But now you can ask Notebook Assistant to do it for you. Here I put the cursor in the Examples section:

\n

\"Click

\n

It’s always a good idea to set up tests for functions you define. And this is another thing Notebook Assistant can help with:

\n

\"Click

\n
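A minimal sketch of what such tests look like, in Python with a hypothetical stand-in function (the actual function from the screenshot isn't visible here):

```python
# A few assertion-style tests for a small hypothetical function:
# a typical input plus a couple of edge cases.
def titlecase_words(s):
    """Capitalize each whitespace-separated word."""
    return " ".join(w.capitalize() for w in s.split())

assert titlecase_words("hello world") == "Hello World"
assert titlecase_words("") == ""
assert titlecase_words("  spaced   out ") == "Spaced Out"
print("all tests passed")
```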

The “Inspiration Button”

\n

All the examples of interacting with Notebook Assistant that we’ve seen so far involve using the Notebook Assistant window, that you can open with the button on the notebook toolbar. But another method involves using the button in the toolbar, which we’ve been calling the “inspiration button”.

\n

When you use the Notebook Assistant window, the Assistant will always try to figure out what you’re talking about. For example, if you say “Plot that” it’ll use what it knows about what notebook you’re using, and where you are in it, to try to work out what you mean by “that”. But when you use the button it’ll specifically try to “provide inspiration at your current selection”.

\n

Let’s say you’ve typed Plot[Sin[x]. Press and it’ll suggest a possible completion:

\n

\"Click

\n

After using that suggestion, you can keep going:

\n

\"Click

\n

You can think of the button as providing a sophisticated meaning-aware autocomplete.

\n

\"Click

\n

It also lets you do things like code simplification. Imagine you’ve written the (rather grotesque):

\n
\n
\n

\n

If you want to get rid of the For loops, just select them and press the button to get a much simpler version:

\n

\"Click

\n

Want to go even further? Select that result and Notebook Assistant manages to get to a one-liner:

\n

\"Click

\n
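The loop-to-one-liner progression can be sketched in Python terms (the computation is a made-up stand-in, since the original Wolfram Language code is shown only as images):

```python
# Step 0: explicit loop accumulating squares of even numbers below 10.
total = 0
for k in range(10):
    if k % 2 == 0:
        total += k * k

# Step 1: the loop replaced by a comprehension.
total2 = sum([k * k for k in range(10) if k % 2 == 0])

# Step 2: the one-liner, using a generator and a stride instead of a test.
total3 = sum(k * k for k in range(0, 10, 2))

print(total, total2, total3)    # 120 120 120
```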

Magic Writing, Magic Coding

\n

At some level it seems bizarre. Write a text cell that describes code to follow it. Start an Input cell, then press and Notebook Assistant will try to magically write the code!

\n

\n

You can go the other way as well. Start with the code, then start a CodeText cell above it, and it’ll “magically” write a caption:

\n

\n

If you start a heading cell, it’ll try to make up a heading:

\n

\n

Start a Text cell, and it’ll try to “magically” write relevant textual content:

\n

\n

You can go even further: just put the cursor underneath the existing content, and press —and Notebook Assistant will start suggesting how you can go on:

\n

\n

As I write this, of course I had to try it: what does Notebook Assistant think I should write next? Here’s what it suggests (and, yes, in this case, those aren’t such bad ideas):

\n

\"Click

\n

The Practicalities of the Assistant

\n

One of the objectives for Notebook Assistant is to have it provide “hassle-free” access to AI and LLM technology integrated into the Wolfram System. And indeed, once you’ve set up your subscription (within your Wolfram Account), everything “just works”. Under the hood, there’s all sorts of technology, servers, etc. But you don’t have to worry about any of that; you can just use Notebook Assistant as a built-in part of the Wolfram Notebook experience.

\n

As you work with Notebook Assistant, you’ll get progressively better intuition about where it can best help you. (And, yes, we’ll be continually updating Notebook Assistant, so it’ll often be worth trying things again if a bit of time has passed.) Notebook Assistant—like any AI-based system—has definite human-like characteristics, including sometimes making mistakes. Often those mistakes will be obvious (e.g. code with incorrect syntax colored red); sometimes they may be more difficult to spot. But the great thing about Notebook Assistant is that it’s firmly anchored to the “solid ground” of Wolfram Language. And any time it writes Wolfram Language code that you can see does what you want, you can always confidently use it.

\n

There are some things that will help Notebook Assistant do its best for you. Particularly important is giving it the best view of the “context” for what you ask it. Notebook Assistant will generally look at whatever has already been said in a particular chat. So if you’re going to change the subject, it’s best to use the button to start a new chat, so Notebook Assistant will focus on the new subject, and not get confused by what you (or it) said before.

\n

When you open the Notebook Assistant chat window you’ll often want to talk about—or refer to—material in some other notebook. Generally Notebook Assistant will assume that the notebook you last used is the one that’s relevant—and that any selection you have in that notebook is the thing to concentrate on the most. If you want Notebook Assistant to focus exclusively on what you’re saying in the chat window, one way to achieve that is to start a blank notebook. Another approach is to use the menu, which provides more detailed control over what material Notebook Assistant will consider. (For now, it just deals with notebooks you have open—but external files, URLs, etc. are coming soon.)

\n

Notebook Assistant will by default store all your chat sessions. You can see your chat history (with chats automatically assigned names by the Assistant) by pressing the History button. You can delete chats from your history here. You can also “pop out” chats, creating standalone notebooks that you can save, send to other people, etc.

\n

So what’s inside Notebook Assistant? It’s quite a tower of technology. The core of its “linguistic interface” is an LLM (actually, several different LLMs)—trained on extensive Wolfram Language material, and with access to a variety of tools, especially Wolfram Language evaluators. Also critical to Notebook Assistant is its access to a variety of RAGs based on vector databases, that it uses for immediate semantic search of material such as Wolfram Language documentation. Oh, and then there’s a lot of technology to connect Notebook Assistant to the symbolic internal structure of notebooks, etc.

\n

So when you use Notebook Assistant, where is it actually running? Its larger LLM tasks are currently running on cloud servers. But a substantial part of its functionality is running right on your computer—using Wolfram Language (notably the Wolfram machine learning framework, vector database system, etc.). And because these things are running locally, the Assistant can request access to local information on your computer—as well as avoiding the latency of accessing cloud-based systems.

\n

Chats in Your Main Notebook (Coming Soon)

\n

Much of the time, you want your interactions with Notebook Assistant to be somehow “off on the side”—say in the Notebook Assistant window, or in the inspiration button menu. But sometimes you want your interactions to be right in your main notebook.

\n

And for this you’ll soon (in Version 14.2) be able to use an enhanced version of the Chat Notebook technology that we developed last year, not just in a separate “Chat Notebook”, but fully integrated into any notebook.

\n

At the beginning of a cell in any notebook, just press the shortcut. You get a chat cell that communicates with Notebook Assistant:

\n

Chat cell

\n

And now the output from that chat cell is placed directly below in the notebook—so you can create a notebook that mixes standard notebook content with chat content.

\n

It all works basically just like a fully integrated version of our Chat Notebook technology. (And this functionality is already available in Version 14.1 if you explicitly create a chat notebook with File > New > Chat Notebook.) As in Chat Notebooks, you use a chat break (with ~) to start a new chat within the same notebook. (In general, when you use a chat cell in an ordinary notebook to access Notebook Assistant, the assistant will see only material that occurs before the chat, and within the same chat block.)

\n

Also Introducing: LLM Kit

\n

In mid-2023 we introduced LLMFunction, LLMSynthesize and related functions (as well as ChatEvaluate, ImageSynthesize, etc.) to let you access LLM functionality directly within the Wolfram Language. Until now these functions required connection to an external LLM provider. But along with Notebook Assistant we’re introducing today LLM Kit—which allows you to access all LLM functionality in the Wolfram Language directly through a subscription within your Wolfram Account.

\n

It’s all very easy: as soon as you enable your subscription, not only Notebook Assistant but also all LLM functionality will just work, going through our LLM service. (And, yes, Notebook Assistant is basically built on top of LLM Kit and the LLM service access it defines.)

\n

When you’ve enabled your Notebook Assistant + LLM Kit subscription, this is what you’ll see in the Preferences panel:

\n

Notebook Assistant + LLM Kit Preferences panel

\n

Our LLM service is primarily aimed at “human speed” LLM usage, in other words, things like responding to what you ask the Notebook Assistant. But the service also seamlessly supports programmatic things like LLMFunction. And for anything beyond small-scale uses of LLMFunction, etc. you’ll probably want to upgrade from the basic “Essentials” subscription level to the “Pro” level. And if you want to go “industrial scale” in your LLM usage, you can do that by explicitly purchasing Wolfram Service Credits.

\n

Everything is set up to be easy if you use our Wolfram LLM service—and that’s what Notebook Assistant is based on. But for Chat Notebooks and programmatic LLM functionality, our Wolfram Language framework also supports connection to a wide range of external LLM service providers. You have to have your own external subscription to whatever external service you want to use. But once you have the appropriate access key you’ll be able to set things up so that you can pick that LLM provider interactively in Chat Notebooks, programmatically through LLMConfiguration, or in the Preferences panel.

\n

(By the way, we’re continually monitoring the performance of different LLMs on Wolfram Language generation; you can see weekly benchmark results at the Wolfram LLM Benchmark Project website—or get the data behind that from the Wolfram Data Repository.)

\n

Opening Up the Ability to “Go Computational”

\n

There’s really never been anything quite like it before: a way of automatically taking what can be quite vague human thoughts and ideas, and making them crisp and structured—by expressing them computationally. And, yes, this is made possible now by the unexpectedly effective linguistic interface that LLMs give us. But ultimately what makes it possible is that the LLMs have a target: the Wolfram Language in all its breadth and depth.

\n

For me it’s an exciting moment. Because it’s a moment where everything we’ve been building these past four decades is suddenly much more broadly accessible. Expert users of Wolfram Language will be able to make use of all sorts of amazing nooks of functionality they never knew about. And people who’ve never used Wolfram Language before—or never even formulated anything computationally—will suddenly be able to do so.

\n

And it’s remarkable what kinds of things one can “make computational”. Let’s say you ask Wolfram Notebook Assistant to make up a story. Like pretty much anything today with LLMs inside, it’ll dutifully do that:

\n


\n

But how can one make something like this computational? Well, just ask Notebook Assistant:

\n


\n


\n

And what it does is rather remarkable: it uses Wolfram Language to create an interactive agent-based computational game version of the story!

\n

Computation is the great paradigm of our times. And the development of “computational X” for all X seems destined to be the future of pretty much every field. The whole tower of ideas and technology that is the modern Wolfram Language was built precisely to provide the computational language that is needed. But now Notebook Assistant is dramatically broadening access to that—making it possible to get “computational language superpowers” using just ordinary (and perhaps even vague) natural language.

\n

And even though I’ve now been living the computational language story for more than four decades, Notebook Assistant keeps on surprising me with what it manages to make computational. It’s incredibly powerful to be able to “go computational”. And even if you can’t imagine how it could work in what you’re doing, you should still just try it! Notebook Assistant may well surprise you—and in that moment show you a path to leverage the great power of the computational paradigm in ways that you’d never imagined.

\n

\n

\n\n\n

\n", + "category": "Artificial Intelligence", + "link": "https://writings.stephenwolfram.com/2024/12/useful-to-the-point-of-being-revolutionary-introducing-wolfram-notebook-assistant/", + "creator": "Stephen Wolfram", + "pubDate": "Mon, 09 Dec 2024 18:38:15 +0000", + "enclosure": "https://content.wolfram.com/sites/43/2024/12/magic1input.mp4", + "enclosureType": "video/mp4", + "image": "https://content.wolfram.com/sites/43/2024/12/magic1input.mp4", + "id": "", + "language": "en", + "folder": "", + "feed": "wolfram", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "b240979ab8f3c96b9a3a98f6eedd735f", + "highlights": [] + }, + { + "title": "Foundations of Biological Evolution: More Results & More Surprises", + "description": "\"\"This is a follow-on to Why Does Biological Evolution Work? A Minimal Model for Biological Evolution and Other Adaptive Processes [May 3, 2024]. Even More from an Extremely Simple Model A few months ago I introduced an extremely simple “adaptive cellular automaton” model that seems to do remarkably well at capturing the essence of what’s […]", + "content": "\"\"\n


\n

This is a follow-on to Why Does Biological Evolution Work? A Minimal Model for Biological Evolution and Other Adaptive Processes [May 3, 2024].

\n

Even More from an Extremely Simple Model

\n

A few months ago I introduced an extremely simple “adaptive cellular automaton” model that seems to do remarkably well at capturing the essence of what’s happening in biological evolution. But over the past few months I’ve come to realize that the model is actually even richer and deeper than I’d imagined. And here I’m going to describe some of what I’ve now figured out about the model—and about the often-surprising things it implies for the foundations of biological evolution.

\n

The starting point for the model is to view biological systems in abstract computational terms. We think of an organism as having a genotype that’s represented by a program, which is then run to produce its phenotype. So, for example, the cellular automaton rules on the left correspond to a genotype, which is then run to produce the phenotype on the right (starting from a “seed” of a single red cell):
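As an illustrative sketch of this genotype-to-phenotype setup (plain Python rather than the Wolfram Language model; the rule encoding and `run_ca` are hypothetical names for this example), here is a minimal 1D cellular automaton grown from a single seed cell:

```python
# Minimal genotype -> phenotype sketch: the "genotype" is a rule table
# mapping each 3-cell neighborhood to a new cell value; the "phenotype"
# is the pattern grown from a single seed cell.

def run_ca(rule_table, width=21, steps=8):
    row = [0] * width
    row[width // 2] = 1                      # single seed cell
    pattern = [row]
    for _ in range(steps):
        row = [rule_table[(row[(i - 1) % width], row[i], row[(i + 1) % width])]
               for i in range(width)]
        pattern.append(row)
    return pattern

# Example genotype: elementary rule 90 (new cell = XOR of its neighbors),
# which grows a nested, Sierpinski-like phenotype.
rule90 = {(a, b, c): a ^ c for a in (0, 1) for b in (0, 1) for c in (0, 1)}

for row in run_ca(rule90):
    print("".join(".#"[v] for v in row))
```

The same scheme extends to the article’s rules by widening the neighborhood (r = 2) and adding more colors (larger k).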

\n
\n
\n

\n

\n

The key idea in our model is to adaptively evolve the genotype rules—say by making single “point mutations” to the list of outcomes from the rules:

\n
\n
\n

\n

At each step in the adaptive evolution we “accept” a mutation if it leads to a phenotype that has a higher—or at least equal—fitness relative to what we had before. So, for example, taking our fitness function to be the height (i.e. lifetime) of the phenotype pattern (with patterns that are infinite being assigned zero fitness), a sequence of (randomly chosen) adaptive evolution steps that go from the null rule to the rule above might be:
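The accept-if-no-worse loop can be sketched in miniature (illustrative Python, not the article’s actual symmetric k = 2, r = 2 model; here a tiny k = 2, r = 1 rule space with a three-cell seed is used, so that at least some nontrivial finite lifetimes exist):

```python
import random

# Sketch of the adaptive-evolution loop: make a single point mutation to
# the genotype (the list of rule outcomes) and accept it iff the fitness
# (pattern lifetime, with never-dying patterns scored 0) doesn't decrease.

NBHDS = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def lifetime(outcomes, width=31, max_steps=50):
    rule = dict(zip(NBHDS, outcomes))
    row = [0] * width
    row[width // 2 - 1:width // 2 + 2] = [1, 1, 1]   # three-cell seed
    for t in range(1, max_steps + 1):
        row = [rule[(row[(i - 1) % width], row[i], row[(i + 1) % width])]
               for i in range(width)]
        if not any(row):
            return t              # pattern died out after t steps
    return 0                      # "infinite" pattern: fitness 0

def evolve(steps=500, seed=0):
    rng = random.Random(seed)
    genotype = [0] * 8            # start from the null rule
    best = lifetime(genotype)
    for _ in range(steps):
        mutant = list(genotype)
        mutant[rng.randrange(8)] ^= 1          # single point mutation
        f = lifetime(mutant)
        if f >= best:             # higher-or-equal fitness: accept
            genotype, best = mutant, f
    return genotype, best

genotype, best = evolve()
print("fitness (lifetime) reached:", best)
```

The `f >= best` test is what allows the fitness-neutral drift discussed below, while the 0-for-infinite convention mirrors the model’s treatment of unbounded patterns.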

\n
\n
\n

\n

What if we make a different sequence of randomly chosen adaptive evolution steps? Here are a few examples of what happens—each in a sense “using a different idea” for how to achieve high fitness:

\n
\n
\n

\n

And, yes, one can’t help but be struck by how “lifelike” this all looks—both in the complexity of these patterns, and in their diversity. But what is ultimately responsible for what we’re seeing? It’s long been a core question about biological evolution. Are the forms it produces the result of careful “sculpting” by the environment (and by the fitness functions it implies)—or are their most important features somehow instead a consequence of something more intrinsic and fundamental that doesn’t depend on details of fitness functions?

\n

Well, let’s say we pick a different fitness function—for example, not the height of a phenotype pattern, but instead its width (or, more specifically, the width of its bounding box). Here are some results of adaptive evolution in this case:

\n
\n
\n

\n

And, yes, the patterns we get are now ones that achieve larger “bounding box width”. But somehow there’s still a remarkable similarity to what we saw with a rather different fitness function above. And, for example, in both cases, high fitness, it seems, is normally achieved in a complicated and hard-to-understand way. (The last pattern is a bit of an exception; as can also happen in biology, this is a case where for once there’s a “mechanism” in evidence that we can understand.)

\n

So what in the end is going on? As I discussed when I introduced the model a few months ago, it seems that the “dominant force” is not selection according to fitness functions, but instead the fundamental computational phenomenon of computational irreducibility. And what we’ll find here is that in fact what we see is, more than anything, the result of an interplay between the computational irreducibility of the process by which our phenotypes develop, and the computational boundedness of typical forms of fitness functions.

\n

The importance of such an interplay is something that’s very much come into focus as a result of our Physics Project. And indeed it now seems that the foundations of both physics and mathematics are—more than anything—reflections of this interplay. And now it seems that’s true of biological evolution as well.

\n

In studying our model, there are many detailed phenomena we’ll encounter—most of which seem to have surprisingly direct analogs in actual biological evolution. For example, here’s what happens if we plot the behavior of the fitness function for our first example above over the course of the adaptive evolution process:

\n
\n
\n

\n

We see a sequence of “plateaus”, punctuated by jumps in fitness that reflect some “breakthrough” being made. In the picture, each red dot represents the fitness associated with a genotype that was tried. Many fall below the line of “best results so far”. But there are also plenty of red dots that lie right on the line. And these correspond to genotypes that yield the same fitness that’s already been achieved. But here—as in actual biological evolution—it’s important that there can be “fitness-neutral evolution”, where genotypes change, but the fitness does not. Usually such changes of genotype yield not just the same fitness, but also the exact same phenotype. Sometimes, however, there can be multiple phenotypes with the same fitness—and indeed this happens at one stage in the example here

\n
\n
\n

\n

and at multiple stages in the second example we showed above:

\n
\n
\n

\n

The Multiway Graph of All Possible Evolutions

\n

In the previous section we saw examples of the results of a few particular random sequences of mutations. But what if we were to look at all possible sequences of mutations? As I discussed when I introduced the model, it’s possible to construct a multiway graph that represents all possible mutation paths. Here’s what one gets for symmetric k = 2, r = 2 rules—starting from the null rule, and using height as a fitness function:

\n
\n
\n

\n

The way this graph is constructed, there are arrows from a given phenotype to all phenotypes with larger (finite) height that can be reached by a single mutation.
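In miniature, that construction can be sketched as follows (illustrative Python over the toy space of k = 2, r = 1 rules with a three-cell seed, chosen only because it is small enough to enumerate; the article’s graphs are for symmetric k = 2, r = 2 rules):

```python
from itertools import product

# Sketch of building a multiway evolution graph: nodes are phenotypes,
# with an arrow A -> B whenever some genotype producing A turns into a
# genotype producing B with strictly larger fitness by one point mutation.

NBHDS = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def phenotype(outcomes, width=19, steps=10):
    rule = dict(zip(NBHDS, outcomes))
    row = tuple(1 if abs(i - width // 2) <= 1 else 0 for i in range(width))
    rows = [row]
    for _ in range(steps):
        row = tuple(rule[(row[(i - 1) % width], row[i], row[(i + 1) % width])]
                    for i in range(width))
        rows.append(row)
    return tuple(rows)

def height(ph):                    # fitness: steps until the pattern dies
    for t, row in enumerate(ph):
        if t > 0 and not any(row):
            return t
    return 0                       # never died within the cap: fitness 0

genotypes = list(product((0, 1), repeat=8))
pheno = {g: phenotype(g) for g in genotypes}

edges = set()
for g in genotypes:
    for i in range(8):
        m = list(g); m[i] ^= 1; m = tuple(m)
        if height(pheno[m]) > height(pheno[g]):
            edges.add((pheno[g], pheno[m]))

print("distinct phenotypes:", len(set(pheno.values())))
print("multiway-graph edges:", len(edges))
```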

\n

But what if our fitness function is width rather than height? Well, then we get a different multiway graph in which arrows go to phenotypes not with larger height but instead with larger width:

\n
\n
\n

\n

So what’s really going on here? Ultimately one can think of there being an underlying graph (that one might call the “mutation graph”) in which every edge represents a transformation between two phenotypes that can be achieved by a single mutation in the underlying genotype:

\n
\n
\n

\n

At this level, the transformations can go either way, so this graph is undirected. But the crucial point is that as soon as one imposes a fitness function, it defines a particular direction for each transformation (at least, each transformation that isn’t fitness neutral for this fitness function). And then if one starts, say, from the null rule, one will pick out a certain “evolution cone” subgraph of the original mutation graph.

\n

So, for example, with width as the fitness function, the subgraph one gets is what’s highlighted here:

\n
\n
\n

\n

There are several subtleties here. First, we simplified the multiway graph by doing transitive reduction and drawing only the minimal edges necessary to define the connectivity of the graph. If we want to see all possible single-mutation transformations between phenotypes we need to do transitive completion, in which case for the width fitness function the multiway graph we get is:
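Transitive reduction itself is straightforward for a DAG: drop every edge that is implied by a longer path. A small plain-Python sketch (the blog’s graphs are of course computed in Wolfram Language):

```python
def transitive_reduction(nodes, edges):
    """Remove every edge (u, v) for which v is still reachable from u
    through some longer path (assumes the graph is a DAG)."""
    adj = {u: set() for u in nodes}
    for u, v in edges:
        adj[u].add(v)
    kept = set(edges)
    for u, v in edges:
        # search from u's *other* successors; hitting v makes (u, v) redundant
        stack = [w for w in adj[u] if w != v]
        seen = set(stack)
        while stack:
            w = stack.pop()
            if w == v:
                kept.discard((u, v))
                break
            for x in adj[w]:
                if x not in seen:
                    seen.add(x)
                    stack.append(x)
    return kept

# Diamond with a shortcut: A->B->D, A->C->D, plus the implied A->D.
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"), ("A", "D")]
print(sorted(transitive_reduction("ABCD", edges)))
# -> [('A', 'B'), ('A', 'C'), ('B', 'D'), ('C', 'D')]: the shortcut is dropped
```

Transitive completion goes the opposite way, adding an edge for every reachable pair.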

\n
\n
\n

\n

But now there’s another subtlety. The edges in the multiway graph represent fitness-changing transformations. But there are also fitness-neutral transformations. And occasionally these can even lead to different (though equal-fitness) phenotypes, so that really each node in the graph above (say, the transitively reduced one) should sometimes be associated with multiple phenotypes

\n
\n
\n

\n

which can “fitness neutrally” transform into each other, as in:

\n
\n
\n

\n

But even this isn’t the end of the subtleties. Fitness-neutral sets typically contain many genotypes differing by changes of rule cases that don’t affect the phenotype they produce. But it may be that just one or a few of these genotypes are “primed” to be able to generate another phenotype with just one additional mutation. Or, in other words, each node in the multiway graph above represents a whole class of genotypes “equivalent under fitness-neutral transformations”, and when we draw an arrow it indicates that some genotype in that class can be transformed by a single mutation to some genotype in the class associated with a different phenotype:

\n
\n
\n

\n

But beyond the subtleties, the key point is that particular fitness functions in effect just define particular orderings on the underlying mutation graph. It’s somewhat like choices of reference frames or families of simultaneity surfaces in physics. Different choices of fitness function in effect define different ways in which the underlying mutation graph can be “navigated” by evolution over the course of time.

\n

As it happens, the results are not so different between height and width fitness functions. Here’s a combined multiway graph, indicating transformations variously allowed by these different fitness functions:

\n
\n
\n

\n

Homing in on a small part of this graph, we see that there are different “flows” associated with maximizing height and maximizing width:

\n
\n
\n

\n

With a single fitness function that for any two phenotypes systematically treats one phenotype as fitter than another, the multiway graph must always define a definite flow. But as soon as one considers changing fitness functions in the course of evolution, it’s possible to get cycles in the multiway graph, as in the example above—so that, in effect, “evolution can repeat itself”.

\n

Fitness Functions Based on Aspect Ratio

\n

We’ve looked at fitness functions based on maximizing height and on maximizing width. But what if we try to combine these? Here’s a plot of the widths and heights of all phenotypes that occur in the symmetric k = 2, r = 2 case we studied above:

\n
\n
\n

\n

We could imagine a variety of ways to define “fitness frontiers” here. But as a specific example, let’s consider fitness functions that are based on trying to achieve specific aspect ratios—i.e. phenotypes that are as close as possible to a particular constant-aspect-ratio line in the plot above.
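Such a fitness function is easy to state: score a phenotype by how close its bounding-box aspect ratio comes to the target. A minimal sketch (illustrative Python; `aspect_fitness` is a hypothetical name, and the exact penalty used in the article may differ):

```python
# Aspect-ratio fitness: maximal (zero) when the phenotype's bounding box
# hits the target width/height ratio exactly, more negative otherwise.

def bounding_box(pattern):
    """Width and height of the nonzero region of a list-of-rows pattern."""
    cols = [i for row in pattern for i, v in enumerate(row) if v]
    rows = [t for t, row in enumerate(pattern) if any(row)]
    return max(cols) - min(cols) + 1, max(rows) - min(rows) + 1

def aspect_fitness(pattern, target):
    w, h = bounding_box(pattern)
    return -abs(w / h - target)

# A 3-wide, 2-tall blob: aspect ratio 1.5.
blob = [[0, 1, 1, 1, 0],
        [0, 0, 1, 0, 0]]
print(aspect_fitness(blob, target=1.5))   # best possible for this blob
print(aspect_fitness(blob, target=0.7))   # penalized: ratio is off by 0.8
```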

\n

With the symmetric k = 2, r = 2 rules we’re using here, only a certain set of aspect ratios can ever be obtained:

\n
\n
\n

\n
\n
\n

\n

The corresponding phenotypes (with their aspect ratios) are:

\n
\n
\n

\n

As we change the aspect ratio that we’re trying to achieve, the evolution multiway graph will change:

\n
\n
\n

\n

In all cases we’re starting from the null rule. For target aspect ratio 1.0 this rule itself already achieves that aspect ratio—so the multiway graph in that case is trivial. But in general, different aspect ratios yield evolution multiway graphs that are different subgraphs of the complete mutation graph we saw above.

\n

So if we follow all possible paths of evolution, how close can we actually get to any given target aspect ratio? This plot shows what final aspect ratios can be achieved as a function of target aspect ratio:

\n
\n
\n

\n

And in a sense this is a summary of the effect of “developmental constraints” for “adaptive cellular automaton organisms” like this. If there were no constraints then for every target aspect ratio it’d be possible to get an “organism” with that aspect ratio—so in the plot there’d be a point lying on the red line. But in actuality the process of cellular automaton growth imposes constraints—which in particular allow only certain phenotypes, with certain aspect ratios, to exist. And beyond that, which phenotypes can actually be reached by adaptive evolution depends on the evolution multiway graph, with “different turns” on the graph leading to different fitness (i.e. different aspect ratio) phenotypes.

\n

But what the plot above shows overall is that for a certain range of target aspect ratios, adaptive evolution is successfully able to get at least close to those aspect ratios. If the target aspect ratio gets out of that range, however, “developmental constraints” come in that prevent the target from being reached.

\n

With “larger genomes”, i.e. rules with larger numbers of cases to specify, it’s possible to do better, and to more accurately achieve particular aspect ratios, over larger ranges of values. And indeed we can see some version of this effect even for symmetric k = 2, r = 2 rules by plotting aspect ratios that can be achieved as a function of the number of cases that need to be specified in the rule:

\n
\n
\n

\n

As an alternative visualization, we can plot the “best convergence to the target” as a function of the number of rule cases—and once again we see that larger numbers of rule cases let us get closer to target aspect ratios:

\n
\n
\n

\n

It’s worth mentioning that—just as we discussed for height and width fitness functions above—there are subtleties here associated with fitness-neutral sets. For example, here are sets of phenotypes that all have the specified aspect ratios—with phenotypes that can be reached by single point mutations being joined:

\n
\n
\n

\n

In the evolution multiway graphs above, we included only one phenotype for each fitness-neutral set. But here’s what we get for target aspect ratio 0.7 if we show all phenotypes with a given fitness:

\n
\n
\n

\n

Note that on the top line, we don’t just get the null rule. Instead, we get four phenotypes, all of which, like the null rule, have aspect ratio 1, and so are equally far from the target aspect ratio 0.7.

\n

The picture above is only the transitively reduced graph. But if we include all possible transformations associated with single point mutations, we get instead:

\n
\n
\n

\n

Based on this graph, we can now make what amounts to a foliation, showing collections of phenotypes reached by a certain minimum number of mutations, progressively approaching our target aspect ratio (here 0.7):

\n
\n
\n

\n

Here’s what we get from the range of target aspect ratios shown above (where, as above, “terminal phenotypes” are highlighted):

\n
\n
\n

\n

In a sense these sequences show us what phenotypes can appear at progressive stages in the “fossil record” for different (aspect-ratio) fitness functions in our very simple model. The highlighted cases are “evolutionary dead ends”. The others can evolve further.

\n

Unreachable Cases

\n

Our model takes the process of adaptive evolution to never “go backwards”, or, in other words, to never evolve from a particular genotype to one with lower fitness. But this means that starting with a certain genotype (say the null rule) there may be genotypes (and hence phenotypes) that will never be reached.
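One way to see this concretely is a breadth-first search over genotypes that follows only never-decreasing-fitness mutations, then compares against the full set of phenotypes (illustrative Python on the toy space of k = 2, r = 1 rules with a three-cell seed, not the article’s model):

```python
from itertools import product

# Sketch: find phenotypes unreachable from the null rule when adaptive
# evolution can only follow point mutations that never decrease fitness
# (fitness = pattern lifetime; never-dying patterns score 0).

NBHDS = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def phenotype(outcomes, width=19, steps=10):
    rule = dict(zip(NBHDS, outcomes))
    row = tuple(1 if abs(i - width // 2) <= 1 else 0 for i in range(width))
    rows = [row]
    for _ in range(steps):
        row = tuple(rule[(row[(i - 1) % width], row[i], row[(i + 1) % width])]
                    for i in range(width))
        rows.append(row)
    return tuple(rows)

def height(ph):
    for t, row in enumerate(ph):
        if t > 0 and not any(row):
            return t
    return 0

all_phenotypes = {phenotype(g) for g in product((0, 1), repeat=8)}

start = (0,) * 8                   # the null rule
frontier, seen = [start], {start}
while frontier:
    g = frontier.pop()
    for i in range(8):
        m = list(g); m[i] ^= 1; m = tuple(m)
        if m not in seen and height(phenotype(m)) >= height(phenotype(g)):
            seen.add(m)
            frontier.append(m)

reached = {phenotype(g) for g in seen}
print("phenotypes:", len(all_phenotypes),
      "unreachable:", len(all_phenotypes - reached))
```

Even in this toy space the unreachable set is nonempty: for instance, any never-dying phenotype has fitness 0 and so can never be accepted starting from the null rule.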

\n

With height as a fitness function, there are just two single (“orphan”) phenotypes that can’t be reached:

\n
\n
\n

\n

And with width as the fitness function, it turns out the very same phenotypes also can’t be reached:

\n
\n
\n

\n

But if we use a fitness function that, for example, tries to achieve aspect ratio 0.7, we get many more phenotypes that can’t be reached starting from the null rule:

\n
\n
\n

\n

In the original mutation graph all the phenotypes appear. But when we foliate (or, more accurately, order) that graph using a particular fitness function, some phenotypes become unreachable by evolutionarily-possible transformations—in a rough analogy to the way some events in physics can become unreachable in the presence of an event horizon.

\n

Multiway Graphs for Larger Rule Spaces

\n

So far we’ve discussed multiway graphs here only for symmetric k = 2, r = 2 rules. There are a total of 524,288 (= 2^19) possible such rules, producing 77 distinct phenotypes. But what about larger classes of rules? As an example, we can consider all k = 2, r = 2 rules, without the constraint of symmetry. There are 2,147,483,648 (= 2^31) possible such rules, and there turn out to be 3,137 distinct phenotypes.
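These rule counts follow from elementary symmetry counting; a quick plain-Python check (the `- 1` reflects the convention that the all-white neighborhood must map to white, which is what makes the exponents 19, 31, and 17 rather than 20, 32, and 18):

```python
from itertools import product

def rule_count(k, r, symmetric):
    """Number of k-color, radius-r rules whose all-zero neighborhood maps
    to zero, optionally identifying mirror-image neighborhoods."""
    nbhds = list(product(range(k), repeat=2 * r + 1))
    if symmetric:
        # one free case per {neighborhood, reversed neighborhood} pair
        cases = len({min(n, n[::-1]) for n in nbhds})
    else:
        cases = len(nbhds)
    return k ** (cases - 1)        # -1: the all-zero case is fixed

print(rule_count(2, 2, symmetric=True))    # 524,288  = 2^19
print(rule_count(2, 2, symmetric=False))   # 2,147,483,648 = 2^31
print(rule_count(3, 1, symmetric=True))    # 129,140,163 = 3^17
```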

\n

For the height fitness function, the complete multiway graph in this case is

\n
\n
\n

\n

or, annotated with actual phenotypes:

\n
\n\n

\n

If instead we just show bounding boxes, it’s easier to see where long-lifetime phenotypes occur:

\n
\n
\n

\n

With a different graph layout the evolution multiway graph (with initial node indicated) becomes:

\n
\n
\n

\n

One subtlety here is that the null rule has no successors under single point mutations. When we were talking about symmetric k = 2, r = 2 rules, we took a “single point mutation” always to change both a particular rule case and its mirror image. But if we don’t have the symmetry requirement, a single point mutation really can just change a single rule case. And if we start from the null rule and look at the results of changing just one bit (i.e. the output of just one rule case) in all possible ways, we find that we either get the same pattern as with the null rule, or we get a pattern that grows without bound:

\n
\n
\n

\n

Or, put another way, we can’t get anywhere with single bit mutations starting purely from the null rule. So what we’ve done is instead to start our multiway graph from k = 2, r = 2 rule 20, which has two bits “on”, and gives phenotype:

\n
\n
\n

\n

But starting from this, just one mutation (together with a sequence of fitness-neutral mutations) is sufficient to give 94 phenotypes—or 49 after removing mirror images:

\n
\n
\n

\n

The total number of new phenotypes we can reach after successively more (non-fitness-neutral) mutations is

\n
\n
\n

\n

while the successive longest-lifetime patterns are:

\n
\n
\n

\n

And what we see here is that it’s in principle possible to achieve long lifetimes even with fairly few mutations. But when the mutations are done at random, it can still take a very large number of steps to successfully “random walk” to long lifetime phenotypes.

\n

And out of a total of 2,407 distinct phenotypes, 984 are “dead ends” where no further evolution is possible. Some of these dead ends have long lifetimes

\n
\n
\n

\n

but others have very short lifetimes:

\n
\n
\n

\n

There’s much more to explore in this multiway graph—and we’ll continue a bit below. But for now let’s look at another evolution multiway graph of accessible size: the one for symmetric k = 3, r = 1 rules. There are a total of 129,140,163 (= 3^17) possible such rules, which yield a total of 14,778 distinct phenotypes:

\n
\n\n

\n

Showing only bounding boxes of patterns this becomes:

\n
\n
\n

\n

Unlike the k = 2, r = 2 case, we can now start this whole graph with the null rule. However, if we look at all possible symmetric k = 3, r = 1 rules, there turn out to be 6 “isolates” that can’t be reached from the null rule by adaptive evolution with the height fitness function:

\n
\n
\n

\n

Starting from the null rule, the number of phenotypes reached after successively more (non-fitness-neutral) mutations is

\n
\n
\n

\n

and the successive longest-lived of these phenotypes are:

\n
\n
\n

\n

Aspect Ratio Fitness

\n

Just as we looked at fitness functions based on aspect ratio above for symmetric k = 2, r = 2 rules, so now we can do this for the whole space of all possible k = 2, r = 2 rules. Here’s a plot of the heights and widths of patterns that can be achieved with these rules:

\n
\n
\n

\n

These are the possible aspect ratios this implies:

\n
\n
\n

\n

And here’s their distribution (on a log scale):

\n
\n
\n

\n

The range of possible values extends much further than for symmetric k = 2, r = 2 rules. The patterns now with the largest aspect ratios are

\n
\n
\n

\n

while those with the smallest aspect ratios are:

\n
\n
\n

\n

Note that just as for symmetric k = 2, r = 2 rules, to reach a wider range of aspect ratios, more cases in the rule have to be specified:

\n
\n
\n

\n

So what happens if we use adaptive evolution to try to reach different possible target aspect ratios? Most of the time (at least up to aspect ratio ≈ 3) there’s some sequence of mutations that will do it—though often we can get stuck at a different aspect ratio:

\n
\n
\n

\n

If we look at the “best convergence” to a given target aspect ratio then we see that this improves as we increase the number of cases specified in the rule:

\n
\n
\n

\n

So what does the multiway graph look like for a fitness function associated with a particular aspect ratio? Here’s the result for aspect ratio 3:

\n
\n
\n

\n

The initial node involves patterns with aspect ratio 1—actually a fitness-neutral set of 263 of them. And as we go through the multiway graph, the aspect ratios get nearer to 3. The very closest they get, though, are for the patterns (whose locations are indicated on the graph):

\n
\n
\n

\n

But actually (as we saw in the lineup above), there is a rule that gives aspect ratio exactly 3:

\n
\n
\n

\n

But it turns out that this rule can’t be reached by adaptive evolution using single point mutations. In effect, adaptive evolution isn’t “strong enough” to achieve the exact aspect ratio we want; we can think of it as being “unpredictably prevented” by computationally irreducible “developmental constraints”.

\n

OK, so what about the symmetric k = 3, r = 1 rules? Here’s how they’re distributed in width and height:

\n
\n
\n

\n

And, yes, in a typical “there are always surprises” story, there’s a strange height 265, width 173 pattern that shows up:

\n
\n
\n

\n

The overall possible aspect ratios are now

\n
\n
\n

\n

and their (log) distribution is:

\n
\n
\n

\n

The phenotypes with the largest aspect ratios are

\n
\n
\n

\n

while those with the smallest aspect ratios are:

\n
\n
\n

\n

Once again, to reach a larger range of aspect ratios, one has to specify more cases in the rule:

\n
\n
\n

\n

If we try to target a certain aspect ratio, there’s somewhat more of a tendency to get stuck than for k = 2, r = 2 rules—perhaps somewhat as a result of there now being fewer total rules (though more phenotypes) available:


Branching in the Multiway Evolution Graph


Looking at a typical multiway evolution graph such as


we see that different phenotypes can be quite separated in the graph—a bit like organisms on different branches of the tree of life in actual biology. But how can we characterize this separation? One approach is to compute the so-called dominator tree of the graph:


We can think of this as a way to provide a map of the least common ancestors of all nodes. The tree is set up so that given two nodes you just trace up the tree to find their common ancestor. Another interpretation of the tree is that it shows you what nodes you have no choice but to pass through in getting from the initial node to any given node—or, in other words, what phenotypes adaptive evolution has to produce on the way to a given phenotype.
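
The "must pass through" property that the dominator tree captures can be checked directly on a small graph. Here's a brute-force Python sketch (fine for tiny graphs; real dominator algorithms such as Lengauer–Tarjan avoid enumerating paths). The graph used is a made-up fragment, not one from the text:

```python
def all_paths(graph, src, dst, prefix=()):
    """Enumerate every path from src to dst in a small DAG."""
    prefix = prefix + (src,)
    if src == dst:
        return [prefix]
    return [p for nxt in graph.get(src, [])
            for p in all_paths(graph, nxt, dst, prefix)]


def dominators(graph, root, node):
    """The nodes that every root -> node path must pass through."""
    paths = all_paths(graph, root, node)
    common = set(paths[0])
    for p in paths[1:]:
        common &= set(p)
    return common


# Toy fragment standing in for a multiway evolution graph:
# two branches from the root rejoin at "c" before reaching "d"
graph = {"root": ["a", "b"], "a": ["c"], "b": ["c"], "c": ["d"]}
```

Here neither "a" nor "b" dominates "c" (each branch can be bypassed), but "c" dominates "d": blocking the phenotype at "c" would cut off "d" entirely.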


Here’s another rendering of the tree:


We can think of this as the analog of the biological tree of life, with successive branchings picking out finer and finer “taxonomic domains” (analogous to kingdoms, phyla, etc.).


The tree also shows us something else: how significant different links or nodes are—and how much of the tree one would “lop off” if they were removed. Or, put a different way, how much would be achieved by blocking a certain link or node—as one might imagine doing to try to block the evolution of bacteria or tumor cells?


What if we look at larger multiway evolution graphs, like the complete k = 2, r = 2 one? Once again we can construct a dominator tree:


It’s notable that there’s tremendous variance in the “fan out” here, with the phenotypes with largest successor counts being the rather undistinguished:


But what if one’s specifically trying to reach, say, one of the maximum lifetime (length-308) phenotypes? Well, then one has to follow the paths in a particular subgraph of the original multiway evolution graph


corresponding to the phenotype graph:


If one goes off this “narrow path” then one simply can’t reach the length-308 phenotype; one inevitably gets stuck in what amounts to another branch of the analog of the “tree of life”. So if one is trying to “guide evolution” to a particular outcome, this tells one that one needs to block off lots of “exit ramps”.


But what “fraction of the whole graph” is the subgraph that leads to the length-308 phenotype? The whole graph has 2409 vertices and 3878 edges, while the subgraph has 64 vertices and 119 edges, i.e. in both cases about 3%. A different measure is what fraction of all paths through the graph lead to the length-308 phenotype. The total number of paths is 606,081, while the number leading to the length-308 phenotype is 1260, or about 0.2%. Does this tell us what the probability of reaching that phenotype will be if we just make a random sequence of mutations? Not quite, because in the multiway evolution graph many equivalencings have been done, notably for fitness-neutral sets. And if we don’t do such equivalencings, it turns out (as we’ll discuss below) that the corresponding number is significantly smaller—about 0.007%.
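
Counting paths like this is a standard dynamic-programming computation on a DAG. Here's a small Python sketch on a made-up graph (the numbers are illustrative, not the actual counts from the multiway evolution graph):

```python
from functools import lru_cache

# Toy DAG standing in for the multiway evolution graph:
# nodes 4 and 5 are terminal ("maximal fitness") phenotypes
dag = {0: [1, 2], 1: [3], 2: [3], 3: [4, 5], 4: [], 5: []}


def count_paths(graph, src, is_end):
    """Count the paths from src that stop at a node satisfying is_end."""
    @lru_cache(maxsize=None)
    def paths_from(n):
        if is_end(n):
            return 1
        return sum(paths_from(m) for m in graph[n])
    return paths_from(src)


total = count_paths(dag, 0, lambda n: not dag[n])   # paths to any terminal node
to_target = count_paths(dag, 0, lambda n: n == 4)   # paths to one chosen terminal
fraction = to_target / total
```

The memoization makes this linear in the size of the graph, so the same computation scales to graphs with hundreds of thousands of paths.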


Exact-Match Fitness Functions


The fitness functions we’ve been considering so far look only at coarse features of phenotype patterns—like their height, width and aspect ratio. But what happens if we have a fitness function that’s maximal only for a phenotype that exactly matches a particular pattern?
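
A cell-agreement fitness function of this kind is easy to state concretely. Here's a Python sketch for k = 2, r = 1 rules grown from a single cell (with cyclic boundaries, which don't matter at this width); the maximum possible fitness is simply the total number of cells in the target:

```python
def run_ca(rule, steps, width=None):
    """Run a k=2, r=1 cellular automaton from a single black cell.
    Boundaries are cyclic, which is irrelevant when width > 2*steps."""
    if width is None:
        width = 2 * steps + 1
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = [(rule >> (4 * row[i - 1] + 2 * row[i] + row[(i + 1) % width])) & 1
               for i in range(width)]
        history.append(row)
    return history


def match_fitness(rule, target):
    """Number of cells in the grown pattern agreeing with the target."""
    grown = run_ca(rule, len(target) - 1, width=len(target[0]))
    return sum(a == b for ra, rb in zip(grown, target)
               for a, b in zip(ra, rb))
```

If the target is itself the rule 30 pattern, then rule 30 achieves the maximal fitness (every cell agrees), while any rule producing a different pattern scores strictly lower.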


As an example, let’s consider k = 2, r = 1 cellular automata with phenotypes grown for a specific number of steps—and with a fitness function that counts the number of cells that agree with ones in a target:


Let’s say we start with the null rule, then adaptively evolve by making single point mutations to the rule (here just 8 bits). With a target of the rule 30 pattern, this is the multiway graph we get:


And what we see is that after a grand tour of nearly a third of all possible rules, we can successfully reach the rule 30 pattern. But we can also get stuck at rule 86 and rule 190 patterns—even though their fitness values are much lower:


If we consider all possible k = 2, r = 1 cellular automaton patterns as targets, it turns out that these can always be reached by adaptive evolution from the null rule—though a little less than half the time there are other possible endpoints (here specified by rule numbers) at which the evolution process can get stuck:


So far we’ve been assuming that we have a fitness function that’s maximized by matching some pattern generated by a cellular automaton. But what if we pick some quite different pattern to match against? Say our pattern is:


With k = 2, r = 1 rules (running with wraparound in a finite-size region), we can construct a multiway graph


and find out that the maximum fitness endpoints are the not-very-good approximations:


We can also get to these by applying random mutations:


But what if we try a larger rule space, say k = 2, r = 2 rules? Our approximations to the “A” image get a bit better:


Going to k = 2, r = 3 leads to slightly better (but not great) final approximations:


If we try to do the same thing with our target instead being


we get for example


while with target


we get (even less convincing) results like:


What’s going on here? Basically it’s that if we try to set up too intricate a fitness function, then our rule spaces won’t contain rules that successfully maximize it, and our adaptive evolution process will end up with a variety of not-very-good approximations.


How Fitness Builds Up


When one looks at an evolution process like


one typically has the impression that successive phenotypes are achieving greater fitness by somehow progressively “building on the ideas” of earlier ones. And to get a more granular sense of this we can highlight cells at each step that are using “newly added cases” in the rule:


We can think of new rule cases as a bit like new genes in biology. So what we’re seeing here is the analog of new genes switching on (or coming into existence) as we progress through the process of biological evolution.


Here’s what happens for some other paths of evolution:


What we see is quite variable. There are a few examples where new rule cases show up only at the end, as if a new “incrementally engineered” pattern was being “grafted on at the end”. But most of the time new rule cases show up sparsely dotted all over the pattern. And somehow those few “tweaks” lead to higher fitness—even though there’s no obvious reason why, and no obvious way to predict where they should be.


It’s interesting to compare this with actual biology, where it’s pretty common to see what appear to be “random gratuitous changes” between apparently very similar organisms. (And, yes, this can lead to all sorts of problems in things like comparing toxicity or drug effectiveness in model animals versus humans.)


There are many ways to consider quantitatively characterizing how “rule utilization” builds up. As just one example, here are plots for successive phenotypes along the evolution paths shown above of what stages in growth new rule cases show up:


But Is It Explainable?


Here are two “adaptively evolved” long-lifetime rules that we discussed at the beginning:


We can always run these rules and see what patterns they produce. But is there a way to explain what they do? And for example to analyze how they manage to yield lifetimes? Or is what we’re seeing in these rules basically “pure computational irreducibility” where the only way to tell what patterns they will generate—and how long they’ll live—is just explicitly to run them step by step?


The second rule here seems to have a bit more regularity than the first, so let’s tackle it first. Let’s look at the “blade” part. Once such an object—of any width—has formed, its behavior will basically be repetitive, and it’s easy to predict what will happen:


The left-hand edge moves by 1 position every 7 steps, and the right-hand edge by 4 positions every 12 steps. And since , however wide the initial configuration is, it’ll always die out, after a number of steps that’s roughly times the initial width.


But OK, how does a configuration like this get produced? Well, that’s far from obvious. Here’s what happens with a sequence of few-cell initial conditions …:


So, yes, it doesn’t always directly make the “blade”. Sometimes, for example, it instead makes things like these, some of which basically just become repetitive, and live forever:


And even if it starts with a “blade texture” unexpected things can happen:


There are repetitive patterns that can persist—and indeed the “blade” uses one of these:


Starting from a random initial condition one sees various kinds of behavior, with the blade being fairly common:


But none of this really makes much of a dent in “explaining” why with this rule, starting from a single red cell, we get a long-lived pattern. Yes, once the “blade” forms, we know it’ll take a while to come to a point. But beyond this little pocket of computational reducibility we can’t say much in general about what the rule does—or why, for example, a blade forms with this initial condition.


So what about our other rule? There’s no obvious interesting pocket of reducibility there at all. Looking at a sequence of few-cell initial conditions we get:


And, yes, there’s all sorts of different behavior that can occur:


The first of these patterns is basically periodic, simply shifting 2 cells to the left every 56 steps. The third one dies out after 369 steps, and the fourth one becomes basically periodic (with period 56) after 1023 steps:


If we start from a random initial condition we see a few places where things die out in a repeatable pattern. But mostly everything just looks very complicated:


As always happens, the rule supports regions of repetitive behavior, but they don’t normally extend far enough to introduce any significant computational reducibility:


So what’s the conclusion? Basically it’s that these rules—like pretty much all others we’ve seen here—behave in essentially computationally irreducible ways. Why do they have long lifetimes? All we can really say is “because they do”. Yes, we can always run them and see what happens. But we can’t make any kind of “explanatory theory”, for example of the kind we’re used to in mathematical approaches to physics.


Distribution in Morphospace


We can think of the pattern of growth seen in each phenotype as defining what we might call in biology its “morphology”. So what happens if we try to operate as “pure taxonomists”, laying out different phenotypes in “morphospace”? Here’s a result based on using machine learning and FeatureSpacePlot:


And, yes, this tends to group “visually similar” phenotypes together. But how does proximity in morphospace relate to proximity in genotypes? Here is the same arrangement of phenotypes as above, but now indicating the transformations associated with single mutations in genotype:


If for example we consider maximizing for height, only some of the phenotypes are picked out:


For width, a somewhat different set are picked out:


And here is what happens if our fitness function is based on aspect ratio:


In other words, different fitness functions “select out” different regions in morphospace.


We can also construct a morphospace not just for symmetric but for all k = 2, r = 2 rules:


The detailed pattern here is not particularly significant, and, more than anything, just reflects the method of dimension reduction that we’ve used. What is more meaningful, however, is how different fitness functions select out different regions in morphospace. This shows the results for fitness functions based on height and on width—with points colored according to the actual values of height and width for those phenotypes:


Here are the corresponding results for fitness functions based on different aspect ratios, where now the coloring is based on closeness to the target aspect ratio:


What’s the main conclusion here? We might have expected that different fitness functions would cleanly select visibly different parts of morphospace. But at least with our machine-learning-based way of laying out morphospace that’s not what we’re seeing. And it seems likely that this is actually a general result—and that there is no layout procedure that can make any “easy to describe” fitness function “geometrically simple” in morphospace. And once again, this is presumably a consequence of underlying computational irreducibility—and of the fact that we can’t expect any morphospace layout procedure to be able to provide a way to “untangle the irreducibility” that will work for all fitness functions.


Probabilities and the Time Course of Evolution


In what we’ve done so far, we’ve mostly been concerned with things like what sequences of phenotypes can ever be produced by adaptive evolution. But in making analogies to actual biological evolution—and particularly to how it’s captured in the fossil record—it’s also relevant to discuss time, and to ask not only what phenotypes can be produced, but also when, and how frequently.


For example, let’s assume there’s a constant rate of point mutations in time. Then starting from a given rule (like the null rule) there’ll be a certain rate at which transitions to other rules occur. Some of these transitions will lead to rules that are selected out. Others will be kept, but will yield the same phenotype. And still others will lead to transitions to different phenotypes.


We can represent this by a “phenotype transition diagram” in which the thickness of each outgoing edge from a given phenotype indicates the fraction of all possible mutations that lead to the transition associated with that edge:


Gray self-loops in this diagram represent transitions that lead back to the same phenotype (because they change cases in the rule that don’t matter). Pink self-loops correspond to transitions that lead to rules that are selected out. We don’t show rules that have been selected out here; instead we assume that in this case we just “wait at the original phenotype” and don’t make a transition.


We can annotate the whole symmetric k = 2, r = 2 multiway evolution graph with transition probabilities:


Underlying this graph is a matrix of transition probabilities between all 2^19 possible symmetric k = 2, r = 2 rules (where the structure reflects the fact that many rules transform to rules which differ only by one bit):


Keeping only distinct phenotypes and ordering by lifetime, we can then make a matrix of phenotype transition probabilities:


Treating the transitions as a Markov process, this allows us to compute the expected frequency of each phenotype as a function of time (i.e. number of mutations):


What’s basically happening here is that there’s steady evolution away from the single-cell phenotype. There are some intermediate phenotypes that come and go, but in the end, everything “flows” to the final (“leaf”) phenotypes on the multiway evolution graph—leading to a limiting “equilibrium” probability distribution:


Stacking the different curves, we get an alternative visualization of the evolution of phenotype frequencies:


If we were “running evolution” with enough separate individuals, these would be the limiting curves we’d get. If we reduced the number of individuals, we’d start to see fluctuations—and there’d be a certain probability, for example, for a particular phenotype to end up with zero individuals, and effectively go extinct.
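
The Markov computation behind these curves can be sketched in a few lines of Python. The 4-phenotype transition matrix here is made up for illustration (the real one is derived from counting mutations between phenotypes); phenotypes 2 and 3 stand in for absorbing "leaf" phenotypes at the bottom of the multiway graph:

```python
# Hypothetical phenotype transition matrix; each row sums to 1.
# Rows/columns: phenotype 0 (initial), 1 (intermediate), 2 and 3 (leaves).
P = [[0.6, 0.2, 0.1, 0.1],
     [0.0, 0.7, 0.2, 0.1],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]


def markov_step(dist, P):
    """One mutation event: new_dist[j] = sum_i dist[i] * P[i][j]."""
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]


dist = [1.0, 0.0, 0.0, 0.0]   # all probability starts at phenotype 0
history = [dist]
for _ in range(200):           # one step per mutation
    dist = markov_step(dist, P)
    history.append(dist)
# In the limit, essentially all probability flows to the absorbing
# leaf phenotypes 2 and 3 -- the "equilibrium" distribution
```

Plotting the rows of `history` gives exactly the kind of rise-and-fall frequency curves shown above, with intermediate phenotypes coming and going before the leaves take over.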


So what happens with a different fitness function? Here’s the result using width instead of height:


And here are results for fitness functions based on a sequence of targets for aspect ratio:


And, yes, the fitness function definitely influences the time course of our adaptive evolution process.


So far we’ve been looking only at symmetric k = 2, r = 2 rules. If we look at the space of all possible k = 2, r = 2 rules, the behavior we see is similar. For example, here’s the time evolution of possible phenotypes based on our standard height fitness function:


And this is what we see if we look only at the longest-lifetime (i.e. largest-height) cases:


As the scale here indicates, such long-lived phenotypes are quite rare—though most still occur with nonzero frequency even after arbitrarily large times (which is inevitable given that they appear as “maximal fitness” terminal nodes in the multiway graph).


And indeed if we plot the final frequencies of phenotypes against their lifetimes we see that there are a wide range of different cases:


The phenotypes with the highest “equilibrium” frequencies are


with some having fairly small lifetimes, and others larger.


The Macroscopic Flow of Evolution


In the previous section, we looked at the time course of evolution with various different—but fixed—fitness functions. But what if we had a fitness function that changes with time—say analogous to an environment for biological evolution that changes with time?


Here’s what happens if we have an aspect ratio fitness function whose target value increases linearly with time:


The behavior we see is quite complex, with certain phenotypes “winning for a while” but then dying out, often quite precipitously—with others coming to take their place.


If instead the target aspect ratio decreases with time, we see rather different behavior:


(The discontinuous derivatives here are basically associated with the sudden appearance of new phenotypes at particular target aspect ratio values.)


It’s also possible to give a “shock to the system” by suddenly changing the target aspect ratio:


And what we see is that sometimes this shock leads to fewer surviving phenotypes, and sometimes to more.


We can think of a changing fitness function as being something that applies a “macroscopic driving force” to our system. Things happen quickly down at the level of individual mutation and selection events—but the fitness function defines overall “goals” for the system that in effect change only slowly. (It’s a bit like a fluid where there are fast molecular-scale processes, but typically slow changes of macroscopic parameters like pressure.)


But if the fitness function defines a goal, how well does the system manage to meet it? Here’s a comparison between an aspect ratio goal (here, linearly increasing) and the distribution of actual aspect ratios achieved, with the darker curve indicating the mean aspect ratio obtained by a weighted average over phenotypes, and the lighter blue area indicating the standard deviation:


And, yes, as we might have expected from earlier results, the system doesn’t do particularly well at achieving the goal. Its behavior is ultimately not “well sculpted” by the forces of a fitness function; instead it is mostly dominated by the intrinsic (computationally irreducible) dynamics of the underlying adaptive evolution process.


One important thing to note, however, is that our results depend on the value of a parameter: essentially the rate at which underlying mutations occur relative to the rate of change of the fitness function. In the picture above, 5000 mutations occur over the time the fitness function goes from minimum to maximum value. This is what happens if we change the number of mutations that occur (or, in effect, the “mutation rate”):


Generally—and not surprisingly—adaptive evolution does better at achieving the target when the mutation rate is higher, though in both the cases shown here, nothing gets terribly close to the target.


In their general character our results here seem reminiscent of what one might expect in typical studies of continuum systems, say based on differential equations. And indeed one can imagine that there might be “continuum equations of adaptive evolution” that govern situations like the ones we’ve seen here. But it’s important to understand that it’s far from self-evident that this is possible. Because underneath everything is a multiway evolution graph with a definite and complicated structure. And one might think that the details of this structure would matter to the overall “continuum evolution process”. And indeed sometimes they will.


But—as we have seen throughout our Physics Project—underlying computational irreducibility leads to a certain inevitable simplicity when looking at phenomena perceived by computationally bounded observers. And we can expect that something similar can happen with biological evolution (and indeed adaptive evolution in general). Assuming that our fitness functions (and their process of change) are computationally bounded, then we can expect that their “aggregate effects” will follow comparatively simple laws—which we can perhaps think of as laws for the “flow of evolution” in response to external input.


Can Evolution Be Reversed?


In the previous section we saw that with different fitness functions, different time series of phenotypes appear, with some phenotypes, for example, sometimes “going extinct”. But let’s say evolution has proceeded to a certain point with a particular fitness function—and certain phenotypes are now present. Then one question we can ask is whether it’s possible to “reverse” that evolution, and revert to phenotypes that were present before. In other words, if we change the fitness function, can we make evolution “go backwards”?


We’ve often discussed a fitness function based on maximizing total (finite) lifetime. But what if, after using this fitness function for a while, we “reverse it”, now minimizing total lifetime?


Consider the multiway evolution graph for symmetric k = 2, r = 2 rules starting from the null rule, with the fitness function yet again being maximizing lifetime:


But what if we now say the fitness function minimizes lifetime? If we start from the longest-lifetime phenotype we get the “lifetime minimization” multiway graph:


We can compare this “reversed graph” to the “forward graph” based on all paths from the null rule to the maximum-lifetime rule:


And in this case we see that the phenotypes that occur are almost the same, with the exception of the fact that can appear in the reverse case.


So what happens when we look at all k = 2, r = 2 rules? Here’s the “reverse graph” starting from the longest-lifetime phenotype:


A total of 345 phenotypes appear here, eventually leading all the way back to . In the overall “forward graph” (which has to start from rather than ) a total of 2409 phenotypes appear, though (as we saw above) only 64 occur in paths that eventually lead to the maximum lifetime phenotype:


And what we see here is that the forward and reverse graphs look quite different. But could we perhaps construct a fitness function for the reverse graph that will successfully corral the evolution process to precisely retrace the steps of the forward graph?


In general, this isn’t something we can expect to be able to do. Because to do so would in effect require “breaking the computational irreducibility” of the system. It would require having a fitness function that can in essence predict every detail of the evolution process—and in so doing be in a position to direct it. But to achieve this, the fitness function would in a sense have to be computationally as sophisticated as the evolution process itself.


It’s a variant of an argument we’ve used several times here. Realistic fitness functions are computationally bounded (and in practice often very coarse). And that means that they can’t expect to match the computational irreducibility of the underlying evolution process.


There’s an analogy to the Second Law of thermodynamics. Just as the microscopic collisions of individual molecules are in principle easy to reverse, so potentially are individual transitions in the evolution graph. But putting many collisions or many transitions together leads to a process that is computationally sophisticated enough that the fairly coarse means at our disposal can’t “decode” and reverse it.


Put another way, there is in practice a certain inevitable irreversibility to both molecular dynamics and biological evolution. Yes, with enough computational effort—say carefully controlling the fitness function for every individual organism—it might in principle be possible to precisely “reverse evolution”. But in practice the kinds of fitness functions that exist in nature—or that one can readily set up in a lab—are computationally much too weak. And as a result one can’t expect to be able to get evolution to precisely retrace its steps.


Random or Selected? Can One Tell?


Given only a genotype, is there a way to tell whether it’s “just random” or whether it’s actually the result of some long and elaborate process of adaptive evolution? From the genotype one can in principle use the rules it defines to “grow” the corresponding phenotype—and then look at whether it has an “unusually large” fitness. But the question is whether it’s possible to tell anything directly from the genotype, without going through the computational effort of generating the phenotype.


At some level it’s like asking whether, say, from a cellular automaton rule, one can predict the ultimate behavior of the cellular automaton. And a core consequence of computational irreducibility is that one can’t in general expect to do this. Still, one might imagine that one could at least make a “reasonable guess” about whether a genotype is “likely” to have been chosen “purely randomly” or to have been “carefully selected”.


To explore this, we can look at the genotypes for symmetric k = 2, r = 2 rules, say ordered by their lifetime-based fitness—with black and white here representing “required” rule cases, and gray representing undetermined ones (which can all independently be either black or white):


On the right is a summary of how many white, black and undetermined (gray) outcomes are present in each genotype. And as we have seen several times, to achieve high fitness all or almost all of the outcomes must be determined—so that in a sense all or almost all of the genome is “being used”. But we still need to ask whether, given a certain actual pattern of outcomes, we can successfully guess whether or not a genotype is the result of selection.


To get more of a sense of this, we can look at plots of the probabilities for different outcomes for each case in the rule, first (trivially) for all combinatorially possible genotypes, then for all genotypes that give viable (i.e. in our case, finite-lifetime) phenotypes, and then for “selected genotypes”:


Certain cases are always completely determined for all viable genomes—but rather trivially so, because, for example, if then the pattern generated will expand at maximum speed forever, and so cannot have a finite lifetime.


So what happens for all k = 2, r = 2 rules? Here are the actual genomes that lead to particular fitness levels:


And now here are the corresponding probabilities for different outcomes for each case in the rule:


And, yes, given a particular setup we could imagine working out from results like these at least an approximation to the likelihood for a given randomly chosen genome to be a selected one. But what’s true in general? Is there something that can be determined with bounded computational effort (i.e. without explicitly computing phenotypes and their fitnesses) that gives a good estimate of whether a genome is selected? There are good reasons to believe that computational irreducibility will make this impossible.


It’s a different story, of course, if one’s given a “fully computed” phenotype. But at the genome level—without that computation—it seems unlikely that one can expect to distinguish random from “selected-somehow” genotypes.


Adaptive Evolution of Initial Conditions


In making our idealized model of biological evolution we’ve focused (as biology seems to) on the adaptive evolution of the genotype—or, in our case, the underlying rule for our cellular automata. But what if instead of changing the underlying rule, we change the initial condition used to “grow each organism”?


For example, let’s say that we start with the “single cell” we’ve been using so far, but then at each step in adaptive evolution we change the value of one cell in the initial condition (say within a certain distance of our original cell)—then keep any initial condition that does not lead to a shorter lifetime:


The sequence of lifetimes (“fitness values”) obtained in this process of adaptive evolution is


and the “breakthrough” initial conditions are:


The basic setup is similar to what we’ve seen repeatedly in the adaptive evolution of rules rather than initial conditions. But one immediate difference is that, at least in the example we’ve just seen, changing initial conditions does not as obviously “introduce new ideas” for how to increase lifetime; instead, it gives more of an impression of just directly extending “existing ideas”.


So what happens more generally? Rules with k = 2, r = 1 tend to show either infinite growth or no growth—with finite lifetimes arising only from direct “erosion” of initial conditions (here for rules 104 and 164):


For k = 2, r = 2 rules the story is more complicated, even in the symmetric case. Here are the sequences of longest lifetime patterns obtained with all possible progressively wider initial conditions with various rules:


Again, there is a certain lack of “fundamentally new ideas” in evidence, though there are definitely “mechanisms” that get progressively extended with larger initial conditions. (One notable regularity is that the maximum lifetimes of patterns often seem roughly proportional to the width of initial condition allowed.)
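
The "erosion" behavior of rule 104 mentioned above makes it a convenient test case for sketching adaptive evolution of initial conditions in Python: flip one cell of the initial condition at a time, and keep the change if the pattern still dies out and doesn't do so sooner. The window width, mutation count and step cap here are illustrative choices:

```python
import random

RULE = 104  # one of the k=2, r=1 rules the text notes shows "erosion"


def ca_step(row):
    # Edges are held white; rule 104 patterns never outgrow their extent
    # (the cases 100 and 001 both map to white).
    return [0] + [(RULE >> (4 * row[i - 1] + 2 * row[i] + row[i + 1])) & 1
                  for i in range(1, len(row) - 1)] + [0]


def lifetime(init, cap=500):
    """Steps until the pattern dies out, or None if it survives `cap` steps."""
    row = [0, 0] + list(init) + [0, 0]
    for t in range(1, cap + 1):
        row = ca_step(row)
        if 1 not in row:
            return t
    return None


def adapt_init(width=9, n_mutations=200, seed=0):
    """Adaptively evolve the initial condition: flip one cell at a time,
    keeping changes that don't shorten the (finite) lifetime."""
    rng = random.Random(seed)
    init = [0] * width
    init[width // 2] = 1
    best = lifetime(init)          # a single cell dies after 1 step
    for _ in range(n_mutations):
        cand = init[:]
        cand[rng.randrange(width)] ^= 1
        lt = lifetime(cand)
        if lt is not None and lt >= best:
            init, best = cand, lt
    return init, best
```

Under rule 104 a single cell dies immediately, a pair of adjacent cells is a still life (infinite lifetime, hence selected out), and a block of three erodes away in 3 steps, so the adaptive process climbs among the finite "eroding" configurations.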


Can adaptive evolution “discover more”? Typically, when it’s just modifying initial conditions in a fixed region, it doesn’t seem so—again it seems to be more about “extending existing mechanisms” than introducing new ones:


2D Cellular Automata


Everything we’ve done so far has been for 1D cellular automata. So what happens if we go to 2D? In the end, the story is going to be very similar to 1D—except that the rule spaces even for quite minimal neighborhoods are vastly larger.


With k = 2 colors, it turns out that with a 5-cell neighborhood one can’t “escape from the null rule” by single point mutations. The issue is that any single case one adds in the rule will either do nothing, or will lead only to unbounded growth. And even with a 9-cell neighborhood one can’t get rules that show growth that is neither limited nor infinite with a single-cell initial condition. But with a initial condition this is possible, and for example here is a sequence of phenotype patterns generated by adaptive evolution using lifetime as a fitness function:


Here’s what these patterns look like when “viewed from above”:


\n

And here’s how the fitness progressively increases in this case:


\n

There are a total of 2^512 ≈ 10^154 possible 9-neighbor rules, and in this vast rule space it’s easy for adaptive evolution to find rules with long finite lifetimes. (By the way, I’ve no idea what the absolute maximum “busy beaver” lifetime in this space is.)
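The rule count is easy to check: a k = 2 rule with a 9-cell neighborhood assigns one output bit to each of the 2^9 possible neighborhood configurations:

```python
n_rules = 2 ** (2 ** 9)      # one output bit per 2**9 neighborhoods: 2**512
print(len(str(n_rules)))     # 155 digits, i.e. about 10**154 rules
```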

\n

Just as in 1D, there’s a fair amount of variation in the behavior one sees. Here are some examples of the “final rules” for various instances of adaptive evolution:


\n

In a few cases one can readily “see the mechanism” for the lifetime—say associated with collisions between localized structures. But mostly, as in the other examples we’ve seen, there’s no realistic “narrative explanation” for how these rules achieve long yet finite lifetimes.

\n

The Turing Machine Case

\n

OK, so we’ve now looked at 2D as well as 1D cellular automata. But what about systems that aren’t cellular automata at all? Will we still see the same core phenomena of adaptive evolution that we’ve identified in cellular automata? The Principle of Computational Equivalence would certainly lead one to expect that we would. But to check at least one example let’s look at Turing machines.

\n

Here’s a Turing machine with s = 3 states for its head, and k = 2 colors for cells on its tape:


\n

The Turing machine is set up to halt if it ever reaches a case in the rule whose output is the halt state. Starting from a blank initial condition, this particular Turing machine halts after 19 steps.

\n

So what happens if we try to adaptively evolve Turing machines with long lifetimes (i.e. that take many steps to halt)? Say we start from a “null rule” that halts in all cases, and then we make a sequence of single point mutations in the rule, keeping ones that don’t lead the Turing machine to halt in fewer steps than before. Here’s an example where the adaptive evolution eventually reaches a Turing machine that takes 95 steps to halt:


\n
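The procedure just described can be sketched in a few lines of Python (the article's experiments are in Wolfram Language; `run`, `evolve` and the 0.2 halt-mutation probability here are my own illustrative choices):

```python
import random

HALT = None

def run(rule, max_steps=2000):
    # run a Turing machine from a blank tape; return the number of steps
    # taken to halt, or None if it has not halted at the cap
    tape, pos, state = {}, 0, 0
    for t in range(1, max_steps + 1):
        out = rule[(state, tape.get(pos, 0))]
        if out is HALT:
            return t
        state, color, move = out
        tape[pos] = color
        pos += move
    return None

def evolve(s, k, steps=500, seed=1):
    # single point mutations starting from the "null rule" that halts in
    # all cases, keeping any mutant machine that halts no sooner
    rng = random.Random(seed)
    rule = {(st, c): HALT for st in range(s) for c in range(k)}
    best = run(rule)                      # the null rule halts in 1 step
    for _ in range(steps):
        trial = dict(rule)
        key = rng.choice(sorted(trial))   # pick one case of the rule
        if rng.random() < 0.2:            # arbitrary chance of a halt output
            trial[key] = HALT
        else:
            trial[key] = (rng.randrange(s), rng.randrange(k),
                          rng.choice([-1, 1]))
        t = run(trial)
        if t is not None and t >= best:   # reject shorter-lived machines
            rule, best = trial, t
    return rule, best

rule, best = evolve(3, 2)
print(best)   # longest halting time found along this evolution path
```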

The sequence of (“breakthrough”) mutations involved here is


\n

corresponding to a fitness curve of the form:


\n

And, yes, all of this is very analogous to what we’ve seen in cellular automata. But one difference is that with Turing machines there are routinely much larger jumps in halting times. And the basic reason for this is just that Turing machines have much less going on at any particular step than typical cellular automata do—so it can take them much longer to achieve some particular state, like a halting state.

\n

Here’s an example of adaptive evolution in the space of s = 3, k = 3 Turing machines—and in this case the final halting time is long enough that we’ve had to squash the image vertically (by a factor of 5):


\n

The fitness curve in this case is best viewed on a logarithmic scale:


\n

But while the largest-lifetime cellular automata that we saw above typically seemed to have very complex behavior, the largest-lifetime Turing machine here seems, at least on the face of it, to operate in a much more “systematic” and “mechanical” way. And indeed this becomes even more evident if we compress our visualization by looking only at steps on which the Turing machine head reverses its direction:


\n

Long-lifetime Turing machines found by adaptive evolution are not always so simple, though they still tend to show more regularity than long-lifetime cellular automata:


\n

But—presumably because Turing machines are “less efficient” than cellular automata—the very longest possible lifetimes can be very large. It’s not clear whether rules with such lifetimes can be found by adaptive evolution—not least because even to evaluate the fitness function for any particular candidate rule could take an unbounded time. And indeed among s = 3, k = 3 rules the very longest possible lifetime is about 10^17 steps—achieved by the rule


\n

with the following “very pedantic behavior”:


\n

So what about multiway evolution graphs? There are a total of 20,736 s = 2, k = 2 Turing machines with halting states allowed. From these there are 37 distinct finite-lifetime phenotypes:


\n

Just as in other cases we’ve investigated, there are fitness-neutral sets such as:


\n

Taking just one representative from each of these 18 sets, we can then construct a multiway evolution graph for 2,2 Turing machines with lifetime as our fitness function:


\n

Here’s the analogous result for 3,2 Turing machines—with 2250 distinct phenotypes, and a maximum lifetime of 21 steps (with the patterns produced by the machines shown just as “slabs”):


\n

We could pick other fitness functions (like maximum pattern width, number of head reversals, etc.). But the basic structure and consequences of adaptive evolution seem to work very much the same in Turing machines as in cellular automata—much as we expect from the Principle of Computational Equivalence.

\n

Multiway Turing Machines

\n

Ordinary Turing machines (as well as ordinary cellular automata) in effect always follow a single path of history, producing a definite sequence of states based on their underlying rule. But it’s also possible to study multiway Turing machines in which many paths of history can be followed. Consider for example the rule:


\n

One case in this rule has two possible outcomes—so this is a multiway system, and to represent its behavior we need a multiway graph:


\n

From a biological point of view, we can potentially think of such a multiway system as an idealized model for a process of adaptive evolution. So now we can ask: can we evolve this evolution? Or, in other words, can we apply adaptive evolution to systems like multiway Turing machines?

\n

As an example, let’s assume that we make single point mutation changes to just one case in a multiway Turing machine rule:


\n

Many multiway Turing machines won’t halt, or at least won’t halt on all their branches. But for our fitness function let’s assume we require multiway Turing machines to halt on all branches (or at least go into loops that revisit the same states), and then let’s take the fitness to be the total number of nodes in the multiway graph when everything has halted. (And, yes, this is a direct generalization of our lifetime fitness function for ordinary Turing machines.)
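This node-counting fitness can be sketched directly (my own Python encoding: a rule maps (state, color) to a list of possible outcomes, with an empty list meaning halt):

```python
def multiway_fitness(rule, max_steps=100):
    # fitness = number of distinct configurations (multiway-graph nodes),
    # provided every branch halts or revisits a configuration by the cap
    start = (0, 0, frozenset())   # (head state, head position, cells set to 1)
    seen, frontier = {start}, [start]
    for _ in range(max_steps):
        nxt = []
        for state, pos, ones in frontier:
            for st, color, move in rule.get((state, int(pos in ones)), []):
                new_ones = ones | {pos} if color else ones - {pos}
                cfg = (st, pos + move, new_ones)
                if cfg not in seen:       # merge identical configurations
                    seen.add(cfg)
                    nxt.append(cfg)
        if not nxt:                       # all branches halted or looped back
            return len(seen)
        frontier = nxt
    return None                           # some branch is still running

print(multiway_fitness({(0, 0): []}))    # 1: a machine that halts at once
```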

\n

So with this setup here are some examples of sequences of “breakthroughs” in adaptive evolution processes:

\n
Breakthrough sequences
\n

\n

But what about looking at all possible paths of evolution for multiway Turing machines? Or, in other words, what about making a multiway graph of the evolution of multiway Turing machines?

\n

Here’s an example of what we get by doing this (showing at each node just a single example of a fitness-neutral set):


\n

So what’s really going on here? We’ve got a multiway graph of multiway graphs. But it’s worth understanding that the inner and outer multiway graphs are a bit different. The outer one is effectively a rulial multiway graph, in which different parts correspond to following different rules. The inner one is effectively a branchial multiway graph, in which different parts correspond to different ways of applying a particular rule. Ultimately, though, we can at least in principle expect to encode branchial transformations as rulial ones, and vice versa.

\n

So we can think of the adaptive evolution of multiway Turing machines as a first step in exploring “higher-order evolution”: the evolution of evolution, etc. And ultimately in exploring inevitable limits of recursive evolution in the ruliad—and how these might relate to the formation of observers in the ruliad.

\n

Some Conclusions

\n

What does all this mean for the foundations of biological evolution? First and foremost, it reinforces the idea of computational irreducibility as a dominant force in biology. One might have imagined that what we see in biology must have been “carefully sculpted” by fitness constraints (say imposed by the environment). But what we’ve found here suggests that instead much of what we see is actually just a direct reflection of computational irreducibility. And in the end, more than anything else, what biological evolution seems to be doing is to “recruit” lumps of irreducible computation, and set them up so as to achieve “fitness objectives”.

\n

All this is, as I recently discovered, very similar to what happens in machine learning. And in both cases this picture implies that there’s a limit to the kind of explanations one can expect to get. If one asks why something has the form it does, the answer will often just be: “because that’s the lump of irreducible computation that happened to be picked up”. And there isn’t any reason to think that there’ll be a “narrative explanation” of the kind one might hope for in traditional science.

\n

The simplicity of models makes it possible to study not just particular possible paths of adaptive evolution, but complete multiway graphs of all possible paths. And what we’ve seen here is that fitness functions in effect define a kind of traversal order or (roughly) foliation for such multiway graphs. If such foliations could be arbitrarily complex, then they could potentially pick out specific outcomes for evolution—in effect successfully “sculpting biology” through the details of natural selection and fitness functions.

\n

But the point is that fitness functions and resulting foliations of multiway evolution graphs don’t get arbitrarily complex. And even as the underlying processes by which phenotypes develop are full of computational irreducibility, the fitness functions that are applied are computationally bounded. And in effect the complexity that is perhaps the single most striking immediate feature of biological systems is therefore a consequence of the interplay between the computational boundedness of selection processes, and the computational irreducibility of underlying processes of growth and development.

\n

All of this relies on the fundamental idea that biological evolution—and biology—are at their core computational phenomena. And given this interpretation, there’s then a remarkable unification that’s emerging.

\n

It begins with the ruliad—the abstract object corresponding to the entangled limit of all possible computational processes. We’ve talked about the ruliad as the ultimate foundation for physics, and for mathematics. And we now see that we can think of it as the ultimate foundation for biology too.

\n

In physics what’s crucial is that observers like us “parse” the ruliad in certain ways—and that these ways lead us to have a perception of the ruliad that follows core known laws of physics. And similarly, when observers like us do mathematics, we can think of ourselves as “extracting that mathematics” from the way we parse the ruliad. And now what we’re seeing is that biology emerges because of the way selection from the environment, etc. “parses” the ruliad.

\n

And what makes this view powerful is that we have to assume surprisingly little about how selection works to still be able to deduce important things about biology. In particular, if we assume that the selection operates in a computationally bounded way, then just from the inevitable underlying computational irreducibility “inherited” from the ruliad, we immediately know that biology must have certain features.

\n

In physics, the Second Law of thermodynamics arises from the interplay of underlying computational irreducibility of mechanical processes involving many particles or other objects, and our computational boundedness as observers. We have the impression that “randomness is increasing” because as computationally bounded observers we can’t “decrypt” the underlying computational irreducibility.

\n

What’s the analog of this in biology? Much as we can’t expect to “say what happens” in a system that follows the Second Law, so we can’t expect to “explain by selection” what happens in a biological system. Or, put another way, much of what we see in biology is just the way it is because of computational irreducibility—and try as we might it won’t be “explainable” by some fitness criterion that we can describe.

\n

But that doesn’t mean that we can’t expect to deduce “general laws of biology”, much as there are general laws about gases whose detailed structure follows the Second Law. And in what we’ve done here we can begin to see some hints of what those general laws might look like.

\n

They’ll be things like bulk statements about possible paths of evolution, and the effect of changing the constraints on them—a bit like laws of fluid mechanics but now applied to the rulial space of possible genotypes. But if there’s one thing that’s clear it’s that the minimal model we’ve developed of biological evolution has remarkable richness and potential. In the past it’s been possible to say things about what amounts to the pure combinatorics of evolution; now we can start talking in a structured way about what evolution actually does. And in doing this we go in the direction of finally giving biology a foundation as a theoretical science.

\n

There’s So Much More to Study!

\n

Even though this is my second long piece about my minimal model of biological evolution, I’ve barely scratched the surface of what can be done with it. First and foremost there are many detailed connections to be made with actual phenomena that have been observed—or could be observed—in biology. But there are also many things to be investigated directly about the model itself—and in effect much ruliology to be done on it. And what’s particularly notable is how accessible a lot of that ruliology is. (And, yes, you can click any picture here to get the Wolfram Language code that generates it.) What are some obvious things to do? Here are a few. Investigate other fitness functions. Other rule spaces. Other initial conditions. Other evolution strategies. Investigate evolving both rules and initial conditions. Investigate different kinds of changes of fitness functions during evolution. Investigate the effect of having a much larger rule space. Investigate robustness (or not) to perturbations.

\n

In what I’ve done here, I’ve effectively aggregated identical genotypes (and phenotypes). But one could also investigate what happens if one in effect “traces every individual organism”. The result will be abstract structures that generalize the multiway systems we’ve shown here—and that are associated with higher levels of abstract formalism capable of describing phenomena that in effect go “below species”.

\n

For historical notes see here »

\n

Thanks

\n

Thanks to Wolfram Institute fellows Richard Assar and Nik Murzin for their help, as well as to the supporters of the new Wolfram Institute initiative in theoretical biology. Thanks also to Brad Klee for his help. Related student projects were done at our Summer Programs this year by Brian Mboya, Tadas Turonis, Ahama Dalmia and Owen Xuan.

\n

Since writing my first piece about biological evolution in March, I’ve had occasion to attend two biology conferences: SynBioBeta and WISE (“Workshop on Information, Selection, and Evolution” at the Carnegie Institution). I thank many attendees at both conferences for their enthusiasm and input. Curiously, before the WISE conference in October 2024 the last conference I had attended on biological evolution was more than 40 years earlier: the June 1984 Mountain Lake Conference on Evolution and Development.

\n", + "category": "Biology", + "link": "https://writings.stephenwolfram.com/2024/12/foundations-of-biological-evolution-more-results-more-surprises/", + "creator": "Stephen Wolfram", + "pubDate": "Thu, 05 Dec 2024 23:13:27 +0000", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "en", + "folder": "", + "feed": "wolfram", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "d699917ce9758bf2a819a95af2fb14d7", + "highlights": [] + }, + { + "title": "On the Nature of Time", + "description": "\"\"The Computational View of Time Time is a central feature of human experience. But what actually is it? In traditional scientific accounts it’s often represented as some kind of coordinate much like space (though a coordinate that for some reason is always systematically increasing for us). But while this may be a useful mathematical description, […]", + "content": "\"\"

The Computational View of Time

\n

Time is a central feature of human experience. But what actually is it? In traditional scientific accounts it’s often represented as some kind of coordinate much like space (though a coordinate that for some reason is always systematically increasing for us). But while this may be a useful mathematical description, it’s not telling us anything about what time in a sense “intrinsically is”.

\n

We get closer as soon as we start thinking in computational terms. Because then it’s natural for us to think of successive states of the world as being computed one from the last by the progressive application of some computational rule. And this suggests that we can identify the progress of time with the “progressive doing of computation by the universe”.

\n

But does this just mean that we are replacing a “time coordinate” with a “computational step count”? No. Because of the phenomenon of computational irreducibility. With the traditional mathematical idea of a time coordinate one typically imagines that this coordinate can be “set to any value”, and that then one can immediately calculate the state of the system at that time. But computational irreducibility implies that it’s not that easy. Because it says that there’s often essentially no better way to find what a system will do than by explicitly tracing through each step in its evolution.

\n

In the pictures on the left there’s computational reducibility, and one can readily see what state the system will be in after any number of steps t. But in the pictures on the right there’s (presumably) computational irreducibility, so that the only way to tell what will happen after t steps is effectively to run all those steps:


\n
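The contrast can be made concrete with a small sketch of my own (rule 254 standing in for a computationally reducible rule, rule 30 for an irreducible one): the state of rule 254 after t steps from a single cell has the closed form [1]*(2t+1), while for rule 30 no such shortcut is known and one simply has to run the steps:

```python
def ca_run(rule, cells, steps):
    # evolve an elementary cellular automaton, padding so it can grow
    bits = [(rule >> i) & 1 for i in range(8)]
    for _ in range(steps):
        cells = [0, 0] + cells + [0, 0]
        cells = [bits[4 * cells[i - 1] + 2 * cells[i] + cells[i + 1]]
                 for i in range(1, len(cells) - 1)]
    return cells

t = 5
# reducible: the state after t steps is predictable without simulation
assert ca_run(254, [1], t) == [1] * (2 * t + 1)
# (presumably) irreducible: the only way to get rule 30's state is to run it
print(ca_run(30, [1], t))
```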

And what this implies is that there’s a certain robustness to time when viewed in these computational terms. There’s no way to “jump ahead” in time; the only way to find out what will happen in the future is to go through the irreducible computational steps to get there.

\n

There are simple idealized systems (say with purely periodic behavior) where there’s computational reducibility, and where there isn’t any robust notion of the progress of time. But the point is that—as the Principle of Computational Equivalence implies—our universe is inevitably full of computational irreducibility which in effect defines a robust notion of the progress of time.

\n

The Role of the Observer

\n

That time is a reflection of the progress of computation in the universe is an important starting point. But it’s not the end of the story. For example, here’s an immediate issue. If we have a computational rule that determines each successive state of a system it’s at least in principle possible to know the whole future of the system. So given this why then do we have the experience of the future only “unfolding as it happens”?

\n

It’s fundamentally because of the way we are as observers. If the underlying system is computationally irreducible, then to work out its future behavior requires an irreducible amount of computational work. But it’s a core feature of observers like us that we are computationally bounded. So we can’t do all that irreducible computational work to “know the whole future”—and instead we’re effectively stuck just doing computation alongside the system itself, never able to substantially “jump ahead”, and only able to see the future “progressively unfold”.

\n

In essence, therefore, we experience time because of the interplay between our computational boundedness as observers, and the computational irreducibility of underlying processes in the universe. If we were not computationally bounded, we could “perceive the whole of the future in one gulp” and we wouldn’t need a notion of time at all. And if there wasn’t underlying computational irreducibility there wouldn’t be the kind of “progressive revealing of the future” that we associate with our experience of time.

\n

A notable feature of our everyday perception of time is that it seems to “flow only in one direction”—so that for example it’s generally much easier to remember the past than to predict the future. And this is closely related to the Second Law of thermodynamics, which (as I’ve argued at length elsewhere) is once again a result of the interplay between underlying computational irreducibility and our computational boundedness. Yes, the microscopic laws of physics may be reversible (and indeed if our system is simple—and computationally reducible—enough of this reversibility may “shine through”). But the point is that computational irreducibility is in a sense a much stronger force.

\n

Imagine that we prepare a state to have orderly structure. If its evolution is computationally irreducible then this structure will effectively be “encrypted” to the point where a computationally bounded observer can’t recognize the structure. Given underlying reversibility, the structure is in some sense inevitably “still there”—but it can’t be “accessed” by a computationally bounded observer. And as a result such an observer will perceive a definite flow from orderliness in what is prepared to disorderliness in what is observed. (In principle one might think it should be possible to set up a state that will “behave antithermodynamically”—but the point is that to do so would require predicting a computationally irreducible process, which a computationally bounded observer can’t do.)

\n

One of the longstanding confusions about the nature of time has to do with its “mathematical similarity” to space. And indeed ever since the early days of relativity theory it’s seemed convenient to talk about “spacetime” in which notions of space and time are bundled together.

\n

But in our Physics Project that’s not at all how things fundamentally work. At the lowest level the state of the universe is represented by a hypergraph which captures what can be thought of as the “spatial relations” between discrete “atoms of space”. Time then corresponds to the progressive rewriting of this hypergraph.

\n

And in a sense the “atoms of time” are the elementary “rewriting events” that occur. If the “output” from one event is needed to provide “input” to another, then we can think of the first event as preceding the second event in time—and the events as being “timelike separated”. And in general we can construct a causal graph that shows the dependencies between different events.
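Building the causal graph from event dependencies can be sketched like this (my own simplified Python; a fuller treatment would track each token being consumed exactly once):

```python
def causal_graph(events):
    # events[i] = (inputs, outputs) as sets of tokens; draw a causal edge
    # i -> j whenever a later event j consumes an output of event i
    edges = set()
    for i, (_, out_i) in enumerate(events):
        for j in range(i + 1, len(events)):
            if out_i & events[j][0]:
                edges.add((i, j))
    return edges

# event 1 needs event 0's output, so it is timelike separated after it;
# event 2 needs only the initial token "a", so it depends on neither
events = [({"a"}, {"b"}), ({"b"}, {"c"}), ({"a"}, {"d"})]
print(causal_graph(events))   # {(0, 1)}
```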

\n

So how does this relate to time—and spacetime? As we’ll discuss below, our everyday experience of time is that it follows a single thread. And so we tend to want to “parse” the causal graph of elementary events into a series of slices that we can view as corresponding to “successive times”. As in standard relativity theory, there typically isn’t a unique way to assign a sequence of such “simultaneity surfaces”, with the result that there are different “reference frames” in which the identifications of space and time are different.

\n

The complete causal graph bundles together what we usually think of as space with what we usually think of as time. But ultimately the progress of time is always associated with some choice of successive events that “computationally build on each other”. And, yes, it’s more complicated because of the possibilities of different choices. But the basic idea of the progress of time as “the doing of computation” is very much the same. (In a sense time represents “computational progress” in the universe, while space represents the “layout of its data structure”.)

\n

Very much as in the derivation of the Second Law (or of fluid mechanics from molecular dynamics), the derivation of Einstein’s equations for the large-scale behavior of spacetime from the underlying causal graph of hypergraph rewriting depends on the fact that we are computationally bounded observers. But even though we’re computationally bounded, we still have to “have something going on inside”, or we wouldn’t record—or sense—any “progress in time”.

\n

It seems to be the essence of observers like us—as captured in my recent Observer Theory—that we equivalence many different states of the world to derive our internal perception of “what’s going on outside”. And at some rough level we might imagine that we’re sensing time passing by the rate at which we add to those internal perceptions. If we’re not adding to the perceptions, then in effect time will stop for us—as happens if we’re asleep, anesthetized or dead.

\n

It’s worth mentioning that in some extreme situations it’s not the internal structure of the observer that makes perceived time stop; instead it’s the underlying structure of the universe itself. As we’ve mentioned, the “progress of the universe” is associated with successive rewriting of the underlying hypergraph. But when there’s been “too much activity in the hypergraph” (which physically corresponds roughly to too much energy-momentum), one can end up with a situation in which “there are no more rewrites that can be done”—so that in effect some part of the universe can no longer progress, and “time stops” there. It’s analogous to what happens at a spacelike singularity (normally associated with a black hole) in traditional general relativity. But now it has a very direct computational interpretation: one’s reached a “fixed point” at which there’s no more computation to do. And so there’s no progress to make in time.

\n

Multiple Threads of Time

\n

Our strong human experience is that time progresses as a single thread. But now our Physics Project suggests that at an underlying level time is actually in effect multithreaded, or, in other words, that there are many different “paths of history” that the universe follows. And it is only because of the way we as observers sample things that we experience time as a single thread.

\n

At the level of a particular underlying hypergraph the point is that there may be many different updating events that can occur, and each sequence of such updating events defines a different “path of history”. We can summarize all these paths of history in a multiway graph in which we merge identical states that arise:


\n
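The merging of identical states is easy to see in a minimal sketch (using strings rather than hypergraphs, with the toy rules A → AB, B → A of my own choosing):

```python
def multiway_step(states, rules):
    # apply every rule at every matching position of every state,
    # merging identical resulting states into a single node
    out = set()
    for s in states:
        for lhs, rhs in rules:
            i = s.find(lhs)
            while i != -1:
                out.add(s[:i] + rhs + s[i + len(lhs):])
                i = s.find(lhs, i + 1)
    return out

# three multiway steps from "A" under A -> AB, B -> A
states = {"A"}
for _ in range(3):
    states = multiway_step(states, [("A", "AB"), ("B", "A")])
print(sorted(states))   # ['AAB', 'ABA', 'ABBB']
```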

But given this underlying structure, why is it that we as observers believe that time progresses as a single thread? It all has to do with the notion of branchial space, and our presence within branchial space. The presence of many paths of history is what leads to quantum mechanics; the fact that we as observers ultimately perceive just one path is associated with the traditionally-quite-mysterious phenomenon of “measurement” in quantum mechanics.

\n

When we talked about causal graphs above, we said that we could “parse” them as a series of “spacelike” slices corresponding to instantaneous “states of space”—represented by spatial hypergraphs. And by analogy we can similarly imagine breaking multiway graphs into “instantaneous slices”. But now these slices don’t represent states of ordinary space; instead they represent states of what we call branchial space.

\n

Ordinary space is “knitted together” by updating events that have causal effects on other events that can be thought of as “located at different places in space”. (Or, said differently, space is knitted together by the overlaps of the elementary light cones of different events.) Now we can think of branchial space as being “knitted together” by updating events that have effects on events that end up on different branches of history.

\n

(In general there is a close analogy between ordinary space and branchial space, and we can define a multiway causal graph that includes both “spacelike” and “branchlike” directions—with the branchlike direction supporting not light cones but what we can call entanglement cones.)

\n

So how do we as observers parse what’s going on? A key point is that we are inevitably part of the system we’re observing. So the branching (and merging) that’s going on in the system at large is also going on in us. So that means we have to ask how a “branching mind” will perceive a branching universe. Underneath, there are lots of branches, and lots of “threads of history”. And there’s lots of computational irreducibility (and even what we can call multicomputational irreducibility). But computationally bounded observers like us have to equivalence most of those details to wind up with something that “fits in our finite minds”.

\n

We can make an analogy to what happens in a gas. Underneath, there are lots of molecules bouncing around (and behaving in computationally irreducible ways). But observers like us are big compared to molecules, and (being computationally bounded) we don’t get to perceive their individual behavior, but only their aggregate behavior—from which we extract a thin set of computationally reducible “fluid-dynamics-level” features.

\n

And it’s basically the same story with the underlying structure of space. Underneath, there’s an elaborately changing network of discrete atoms of space. But as large, computationally bounded observers we can only sample aggregate features in which many details have been equivalenced, and in which space tends to seem continuous and describable in basically computationally reducible ways.

\n

So what about branchial space? Well, it’s basically the same story. Our minds are “big”, in the sense that they span many individual branches of history. And they’re computationally bounded so they can’t perceive the details of all those branches, but only certain aggregated features. And in a first approximation what then emerges is in effect a single aggregated thread of history.

\n

With sufficiently careful measurements we can sometimes see “quantum effects” in which multiple threads of history are in evidence. But at a direct human level we always seem to aggregate things to the point where what we perceive is just a single thread of history—or in effect a single thread of progression in time.

\n

It’s not immediately obvious that any of these “aggregations” will work. It could be that important effects we perceive in gases would depend on phenomena at the level of individual molecules. Or that to understand the large-scale structure of space we’d continually be having to think about detailed features of atoms of space. Or, similarly, that we’d never be able to maintain a “consistent view of history”, and that instead we’d always be having to trace lots of individual threads of history.

\n

But the key point is that for us to stay as computationally bounded observers we have to pick out only features that are computationally reducible—or in effect boundedly simple to describe.

\n

Closely related to our computational boundedness is the important assumption we make that we as observers have a certain persistence. At every moment in time, we are made from different atoms of space and different branches in the multiway graph. Yet we believe we are still “the same us”. And the crucial physical fact (that has to be derived in our model) is that in ordinary circumstances there’s no inconsistency in doing this.

\n

So the result is that even though there are many “threads of time” at the lowest level—representing many different “quantum branches”—observers like us can (usually) successfully still view there as being a single consistent perceived thread of time.

\n

But there’s another issue here. It’s one thing to say that a single observer (say a single human mind or a single measuring device) can perceive history to follow a single, consistent thread. But what about different human minds, or different measuring devices? Why should they perceive any kind of consistent “objective reality”?

\n

Essentially the answer, I think, is that they’re all sufficiently nearby in branchial space. If we think about physical space, observers in different parts of the universe will clearly “see different things happening”. The “laws of physics” may be the same—but what star (if any) is nearby will be different. Yet (at least for the foreseeable future) for all of us humans it’s always the same star that’s nearby.

\n

And so it is, presumably, in branchial space. There’s some small patch in which we humans—with our shared origins—exist. And it’s presumably because that patch is small relative to all of branchial space that all of us perceive a consistent thread of history and a common objective reality.

\n

There are many subtleties to this, many of which aren’t yet fully worked out. In physical space, we know that effects can in principle spread at the speed of light. And in branchial space the analog is that effects can spread at the maximum entanglement speed (whose value we don’t know, though it’s related by Planck unit conversions to the elementary length and elementary time). But in maintaining our shared “objective” view of the universe it’s crucial that we’re not all going off in different directions at the speed of light. And of course the reason that doesn’t happen is that we don’t have zero mass. And indeed presumably nonzero mass is a critical part of being observers like us.

\n

In our Physics Project it’s roughly the density of events in the hypergraph that determines the density of energy (and mass) in physical space (with their associated gravitational effects). And similarly it’s roughly the density of events in the multiway graph (or in branchial graph slices) that determines the density of action—the relativistically invariant analog of energy—in branchial space (with its associated effects on quantum phase). And though it’s not yet completely clear how this works, it seems likely that once again when there’s mass, effects don’t just “go off at the maximum entanglement speed in all directions”, but instead stay nearby.

\n

There are definitely connections between “staying at the same place”, believing one is persistent, and being computationally bounded. But these are what seem necessary for us to have our typical view of time as a single thread. In principle we can imagine observers very different from us—say with minds (like the inside of an idealized quantum computer) capable of experiencing many different threads of history. But the Principle of Computational Equivalence suggests that there’s a high bar for such observers. They need to be able to deal not only with computational irreducibility but also with multicomputational irreducibility, in which one includes both the process of computing new states and the process of equivalencing states.

\n

And so for observers that are “anything like us” we can expect that once again time will tend to be as we normally experience it, following a single thread, consistent between observers.

\n

(It’s worth mentioning that all of this only works for observers like us “in situations like ours”. For example, at the “entanglement horizon” for a black hole—where branchially-oriented edges in the multiway causal graph get “trapped”—time as we know it in some sense “disintegrates”, because an observer won’t be able to “knit together” the different branches of history to “form a consistent classical thought” about what happens.)

\n

Time in the Ruliad

\n

In what we’ve discussed so far we can think of the progress of time as being associated with the repeated application of rules that progressively “rewrite the state of the universe”. In the previous section we saw that these rules can be applied in many different ways, leading to many different underlying threads of history.

\n

But so far we’ve imagined that the rules that get applied are always the same—leaving us with the mystery of “Why those rules, and not others?” But this is where the ruliad comes in. Because the ruliad involves no such seemingly arbitrary choices: it’s what you get by following all possible computational rules.

\n

One can imagine many bases for the ruliad. One can make it from all possible hypergraph rewritings. Or all possible (multiway) Turing machines. But in the end it’s a single, unique thing: the entangled limit of all possible computational processes. There’s a sense in which “everything can happen somewhere” in the ruliad. But what gives the ruliad structure is that there’s a definite (essentially geometrical) way in which all those different things that can happen are arranged and connected.

\n

So what is our perception of the ruliad? Inevitably we’re part of the ruliad—so we’re observing it “from the inside”. But the crucial point is that what we perceive about it depends on what we are like as observers. And my big surprise in the past few years has been that assuming even just a little about what we’re like as observers immediately implies that what we perceive of the ruliad follows the core laws of physics we know. In other words, by assuming what we’re like as observers, we can in effect derive our laws of physics.

\n

The key to all this is the interplay between the computational irreducibility of underlying behavior in the ruliad, and our computational boundedness as observers (together with our related assumption of our persistence). And it’s this interplay that gives us the Second Law in statistical mechanics, the Einstein equations for the structure of spacetime, and (we think) the path integral in quantum mechanics. In effect what’s happening is that our computational boundedness as observers makes us equivalence things to the point where we are sampling only computationally reducible slices of the ruliad, whose characteristics can be described using recognizable laws of physics.

\n

So where does time fit into all of this? A central feature of the ruliad is that it’s unique—and everything about it is “abstractly necessary”. Much as given the definition of numbers, addition and equality it’s inevitable that one gets 1 + 1 = 2, so similarly given the definition of computation it’s inevitable that one gets the ruliad. Or, in other words, there’s no question about whether the ruliad exists; it’s just an abstract construct that inevitably follows from abstract definitions.

\n

And so at some level this means that the ruliad inevitably just “exists as a complete thing”. And so if one could “view it from outside” one could think of it as just a single timeless object, with no notion of time.

\n

But the crucial point is that we don’t get to “view it from the outside”. We’re embedded within it. And, what’s more, we must view it through the “lens” of our computational boundedness. And this is why we inevitably end up with a notion of time.

\n

We observe the ruliad from some point within it. If we were not computationally bounded then we could immediately compute what the whole ruliad is like. But in actuality we can only discover the ruliad “one computationally bounded step at a time”—in effect progressively applying bounded computations to “move through rulial space”.

\n

So even though in some abstract sense “the whole ruliad is already there” we only get to explore it step by step. And that’s what gives us our notion of time, through which we “progress”.

\n

Inevitably, there are many different paths that we could follow through the ruliad. And indeed every mind (and every observer like us)—with its distinct inner experience—presumably follows a different path. But much as we described for branchial space, the reason we have a shared notion of “objective reality” is presumably that we are all very close together in rulial space; we form in a sense a tight “rulial flock”.

\n

It’s worth pointing out that not every sampling of the ruliad that may be accessible to us conveniently corresponds to exploration of progressive slices of time. Yes, that kind of “progression in time” is characteristic of our physical experience, and our typical way of describing it. But what about our experience, say, of mathematics?

\n

The first point to make is that just as the ruliad contains all possible physics, it also contains all possible mathematics. If we construct the ruliad, say from hypergraphs, the nodes are now not “atoms of space”, but instead abstract elements (that in general we call emes) that form pieces of mathematical expressions and mathematical theorems. We can think of these abstract elements as being laid out now not in physical space, but in some abstract metamathematical space.

\n

In our physical experience, we tend to remain localized in physical space, branchial space, etc. But in “doing mathematics” it’s more as if we’re progressively expanding in metamathematical space, carving out some domain of “theorems we assume are true”. And while we could identify some kind of “path of expansion” to let us define some analog of time, it’s not a necessary feature of the way we explore the ruliad.

\n

Different places in the ruliad in a sense correspond to describing things using different rules. And by analogy to the concept of motion in physical space, we can effectively “move” from one place to another in the ruliad by translating the computations done by one set of rules to computations done by another. (And, yes, it’s nontrivial to even have the possibility of “pure motion”.) But if we indeed remain localized in the ruliad (and can maintain what we can think of as our “coherent identity”) then it’s natural to think of there being a “path of motion” along which we progress “with time”. But when we’re just “expanding our horizons” to encompass more paradigms and to bring more of rulial space into what’s covered by our minds (so that in effect we’re “expanding in rulial space”), it’s not really the same story. We’re not thinking of ourselves as “doing computation in order to move”. Instead, we’re just identifying equivalences and using them to expand our definition of ourselves, which is something that we can at least approximate (much like in “quantum measurement” in traditional physics) as happening “outside of time”. Ultimately, though, everything that happens must be the result of computations that occur. It’s just that we don’t usually “package” these into what we can describe as a definite thread of time.

\n

So What in the End Is Time?

\n

From the paradigm (and Physics Project ideas) that we’ve discussed here, the question “What is time?” is at some level simple: time is what progresses when one applies computational rules. But what’s critical is that time can in effect be defined abstractly, independent of the details of those rules, or the “substrate” to which they’re applied. And what makes this possible is the Principle of Computational Equivalence, and the ubiquitous phenomenon of computational irreducibility that it implies.

\n

To begin with, the fact that time can robustly be thought of as “progressing”, in effect in a linear chain, is a consequence of computational irreducibility—because computational irreducibility is what tells us that computationally bounded observers like us can’t in general ever “jump ahead”; we just have to follow a linear chain of steps.

\n

But there’s something else as well. The Principle of Computational Equivalence implies that there’s in a sense just one (ubiquitous) kind of computational irreducibility. So when we look at different systems following different irreducible computational rules, there’s inevitably a certain universality to what they do. In effect they’re all “accumulating computational effects” in the same way. Or in essence progressing through time in the same way.

\n

There’s a close analogy here with heat. It could be that there’d be detailed molecular motion that even on a large scale worked noticeably differently in different materials. But the fact is that we end up being able to characterize any such motion just by saying that it represents a certain amount of heat, without getting into more details. And that’s very much the same kind of thing as being able to say that such-and-such an amount of time has passed, without having to get into the details of how some clock or other system that reflects the passage of time actually works.

\n

And in fact there’s more than a “conceptual analogy” here. Because the phenomenon of heat is again a consequence of computational irreducibility. And the fact that there’s a uniform, “abstract” characterization of it is a consequence of the universality of computational irreducibility.

\n

It’s worth emphasizing again, though, that just as with heat, a robust concept of time depends on us being computationally bounded observers. If we were not, then we’d be able to break the Second Law by doing detailed computations of molecular processes, and we wouldn’t just describe things in terms of randomness and heat. And similarly, we’d be able to break the linear flow of time, either jumping ahead or following different threads of time.

\n

But as computationally bounded observers of computationally irreducible processes, it’s basically inevitable that—at least to a good approximation—we’ll view time as something that forms a single one-dimensional thread.

\n

In traditional mathematically based science there’s often a feeling that the goal should be to “predict the future”—or in effect to “outrun time”. But computational irreducibility tells us that in general we can’t do this, and that the only way to find out what will happen is just to run the same computation as the system itself, essentially step by step. But while this might seem like a letdown for the power of science, we can also see it as what gives meaning and significance to time. If we could always jump ahead then at some level nothing would ever fundamentally be achieved by the passage of time (or, say, by the living of our lives); we’d always be able to just say what will happen, without “living through” how we got there. But computational irreducibility gives time and the process of it passing a kind of hard, tangible character.

\n

So what does all this imply for the various classic issues (and apparent paradoxes) that arise in the way time is usually discussed?

\n

Let’s start with the question of reversibility. The traditional laws of physics basically apply both forwards and backwards in time. And the ruliad is inevitably symmetrical between “forward” and “backward” rules. So why is it then that in our typical experience time always seems to “run in the same direction”?

\n

This is closely related to the Second Law, and once again it’s a consequence of our computational boundedness interacting with underlying computational irreducibility. In a sense what defines the direction of time for us is that we (typically) find it much easier to remember the past than to predict the future. Of course, we don’t remember every detail of the past. We only remember what amounts to certain “filtered” features that “fit in our finite minds”. And when it comes to predicting the future, we’re limited by our inability to “outrun” computational irreducibility.

\n

Let’s recall how the Second Law works. It basically says that if we set up some state that’s “ordered” or “simple” then this will tend to “degrade” to one that’s “disordered” or “random”. (We can think of the evolution of the system as effectively “encrypting” the specification of our starting state to the point where we—as computationally bounded observers—can no longer recognize its ordered origins.) But because our underlying laws are reversible, this degradation (or “encryption”) must happen when we go both forwards and backwards in time:

\n
\n
\n

\n
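This “degrades in both directions” point can be illustrated with a second-order reversible cellular automaton in the style of Wolfram’s rule 30R (a minimal Python sketch of mine, not code from the article): the update is exactly invertible, yet a simple initial state degrades into apparent randomness whichever way one runs it.

```python
# Second-order ("reversible") cellular automaton in the style of rule 30R:
#   s[t+1][i] = rule30(s[t][i-1], s[t][i], s[t][i+1]) XOR s[t-1][i]
# Running the same update on the time-swapped pair of states retraces history.

def rule30(left, center, right):
    # Elementary rule 30: new cell = left XOR (center OR right)
    return left ^ (center | right)

def step(prev, curr):
    n = len(curr)
    return [rule30(curr[(i - 1) % n], curr[i], curr[(i + 1) % n]) ^ prev[i]
            for i in range(n)]

def evolve(prev, curr, steps):
    for _ in range(steps):
        prev, curr = curr, step(prev, curr)
    return prev, curr

n = 101
blank = [0] * n
seed = [0] * n
seed[n // 2] = 1  # single "ordered" cell

# Forward: the simple seed degrades into a complicated-looking state...
a, b = evolve(blank, seed, 200)

# ...but swapping the two states and applying the SAME rule runs backward,
# exactly recovering the ordered initial condition.
c, d = evolve(b, a, 200)
print(d == blank and c == seed)  # True
```

The recovery is guaranteed by construction (XOR makes the second-order step exactly invertible), which is the analog of the reversibility of the underlying laws in the text.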

But the point is that our “experiential” definition of the direction of time (in which the “past” is what we remember, and the “future” is what we find hard to predict) is inevitably aligned with the “thermodynamic” direction of time we observe in the world at large. And the reason is that in both cases we’re defining the past to be something that’s computationally bounded (while the future can be computationally irreducible). In the experiential case the past is computationally bounded because that’s what we can remember. In the thermodynamic case it’s computationally bounded because those are the states we can prepare. In other words, the “arrows of time” are aligned because in both cases we are in effect “requiring the past to be simpler”.

\n

So what about time travel? It’s a concept that seems natural—and perhaps even inevitable—if one imagines that “time is just like space”. But it becomes a lot less natural when we think of time in the way we’re doing here: as a process of applying computational rules.

\n

Indeed, at the lowest level, these rules are by definition just sequentially applied, producing one state after another—and in effect “progressing in one direction through time”. But things get more complicated if we consider not just the raw, lowest-level rules, but what we might actually observe of their effects. For example, what if the rules lead to a state that’s identical to one they’ve produced before (as happens, for example, in a system with periodic behavior)? If we equivalence the state now and the state before (so we represent both as a single state) then we can end up with a loop in our causal graph (a “closed timelike curve”). And, yes, in terms of the raw sequence of applying rules these states can be considered different. But the point is that if they are identical in every feature then any observer will inevitably consider them the same.

\n
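The state-equivalencing point can be sketched concretely (a toy Python example of mine, with a deliberately periodic map standing in for a system with periodic behavior): once identical states are represented as a single state, the linear history closes into a loop.

```python
# When evolution produces a state identical to an earlier one, equivalencing
# the two turns the "thread of history" into a loop (a cycle in the
# state-transition graph). Hypothetical toy map chosen to be periodic.

def find_loop(step, start):
    """Iterate `step` from `start`; on the first repeated state, return
    (prefix_length, cycle_length)."""
    seen = {}  # state -> time it first occurred
    state, t = start, 0
    while state not in seen:
        seen[state] = t
        state, t = step(state), t + 1
    return seen[state], t - seen[state]

# x -> 3*x mod 11 on nonzero residues is purely periodic:
# 1 -> 3 -> 9 -> 5 -> 4 -> 1 -> ...
prefix, cycle = find_loop(lambda x: (3 * x) % 11, 1)
print(prefix, cycle)  # 0 5
```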

But will such equivalent states ever actually occur? As soon as there’s computational irreducibility it’s basically inevitable that the states will never perfectly match up. And indeed for the states to contain an observer like us (with “memory”, etc.) it’s basically impossible that they can match up.

\n

But can one imagine an observer (or a “timecraft”) that would lead to states that match up? Perhaps somehow it could carefully pick particular sequences of atoms of space (or elementary events) that would lead it to states that have “happened before”. And indeed in a computationally simple system this might be possible. But as soon as there’s computational irreducibility, this simply isn’t something one can expect any computationally bounded observer to be able to do. And, yes, this is directly analogous to why one can’t have a “Maxwell’s demon” observer that “breaks the Second Law”. Or why one can’t have something that carefully navigates the lowest-level structure of space to effectively travel faster than light.

\n

But even if there can’t be time travel in which “time for an observer goes backwards”, there can still be changes in “perceived time”, say as a result of relativistic effects associated with motion. For example, one classic relativistic effect is time dilation, in which “time goes slower” when objects go faster. And, yes, given certain assumptions, there’s a straightforward mathematical derivation of this effect. But in our effort to understand the nature of time we’re led to ask what its physical mechanism might be. And it turns out that in our Physics Project it has a surprisingly direct—and almost “mechanical”—explanation.

\n

One starts from the fact that in our Physics Project space and everything in it is represented by a hypergraph which is continually getting rewritten. And the evolution of any object through time is then defined by these rewritings. But if the object moves, then in effect it has to be “re-created at a different place in space”—and this process takes up a certain number of rewritings, leaving fewer for the intrinsic evolution of the object itself, and thus causing time to effectively “run slower” for it. (And, yes, while this is a qualitative description, one can make it quite formal and precise, and recover the usual formulas for relativistic time dilation.)

\n
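The “usual formulas for relativistic time dilation” referred to here are the standard special-relativistic ones (ordinary physics, not derived in this excerpt): for an object moving at speed v, the elapsed proper time satisfies

```latex
\Delta t_{\text{moving}} \;=\; \Delta t_{\text{rest}}\,\sqrt{1-\frac{v^{2}}{c^{2}}}
```

The claim in the text is that counting the hypergraph rewritings “used up” by motion reproduces exactly this factor.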

Something similar happens with gravitational fields. In our Physics Project, energy-momentum (and thus gravity) is effectively associated with greater activity in the underlying hypergraph. And this greater activity in effect uses up more of the available rewritings, leaving fewer for the intrinsic evolution of any object in that region of space, and causing “time to run slower” for it (corresponding to the traditional gravitational redshift).

\n

More extreme versions of this occur in the context of black holes. (Indeed, one can roughly think of spacelike singularities as places where “time ran so fast that it ended”.) And in general—as we discussed above—there are many “relativistic effects” in which notions of space and time get mixed in various ways.

\n

But even at a much more mundane level there’s a certain crucial relationship between space and time for observers like us. The key point is that observers like us tend to “parse” the world into a sequence of “states of space” at successive “moments in time”. But the fact that we do this depends on some quite specific features of us, and in particular our effective physical scale in space as compared to time.

\n

In our everyday life we’re typically looking at scenes involving objects that are perhaps tens of meters away from us. And given the speed of light that means photons from these objects get to us in less than a microsecond. But it takes our brains milliseconds to register what we’ve seen. And this disparity of timescales is what leads us to view the world as consisting of a sequence of states of space at successive moments in time.

\n
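The arithmetic here is easy to make concrete (a back-of-envelope Python sketch; the ~10 ms brain timescale is a round number assumed for illustration):

```python
# Light travel time across an everyday scene vs. neural processing time.
c = 299_792_458.0         # speed of light, m/s
scene = 30.0              # a scene a few tens of meters away, m
photon_time = scene / c   # ~1e-7 s: well under a microsecond
brain_time = 10e-3        # ~10 ms for the brain to register a scene (assumed)

print(photon_time)                # ~1.0e-7 seconds
print(brain_time / photon_time)  # disparity of roughly five orders of magnitude
```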

If our brains “ran” a million times faster (i.e. at the speed of digital electronics) we’d perceive photons arriving from different parts of a scene at different times, and we’d presumably no longer view the world in terms of overall states of space existing at successive times.

\n

The same kind of thing would happen if we kept the speed of our brains the same, but dealt with scenes of a much larger scale (as we already do in dealing with spacecraft, astronomy, etc.).

\n

But while this affects what it is that we think time is “acting on”, it doesn’t ultimately affect the nature of time itself. Time remains that computational process by which successive states of the world are produced. Computational irreducibility gives time a certain rigid character, at least for computationally bounded observers like us. And the Principle of Computational Equivalence allows there to be a robust notion of time independent of the “substrate” that’s involved: whether us as observers, the everyday physical world, or, for that matter, the whole universe.

\n", + "category": "Philosophy", + "link": "https://writings.stephenwolfram.com/2024/10/on-the-nature-of-time/", + "creator": "Stephen Wolfram", + "pubDate": "Tue, 08 Oct 2024 21:41:58 +0000", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "en", + "folder": "", + "feed": "wolfram", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "c94c66d5dc78641ceff4146b0baf3c50", + "highlights": [] + }, + { + "title": "Nestedly Recursive Functions", + "description": "\"\"Yet Another Ruliological Surprise Integers. Addition. Subtraction. Maybe multiplication. Surely that’s not enough to be able to generate any serious complexity. In the early 1980s I had made the very surprising discovery that very simple programs based on cellular automata could generate great complexity. But how widespread was this phenomenon? At the beginning of the […]", + "content": "\"\"


\n

Yet Another Ruliological Surprise

\n

Integers. Addition. Subtraction. Maybe multiplication. Surely that’s not enough to be able to generate any serious complexity. In the early 1980s I had made the very surprising discovery that very simple programs based on cellular automata could generate great complexity. But how widespread was this phenomenon?

\n

At the beginning of the 1990s I had set about exploring this. Over and over I would consider some type of system and be sure it was too simple to “do anything interesting”. And over and over again I would be wrong. And so it was that on the night of August 13, 1993, I thought I should check what could happen with integer functions defined using just addition and subtraction.

\n

I knew, of course, about defining functions by recursion, like Fibonacci:

\n
\n
\n

\n
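As a baseline, the Fibonacci definition with the initial conditions used here (f[1] = f[2] = 1) can be transcribed into Python (a sketch; the original is of course a Wolfram Language definition):

```python
from functools import lru_cache

# The Fibonacci recurrence with the initial conditions from the text:
# f[1] = f[2] = 1.
@lru_cache(maxsize=None)
def fib(n):
    if n <= 2:
        return 1
    return fib(n - 1) + fib(n - 2)

print([fib(n) for n in range(1, 11)])  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```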

But could I find something like this that would have complex behavior? I did the analog of what I have done so many times, and just started (symbolically) enumerating possible definitions. And immediately I saw cases with nested functions, like:

\n
\n
\n

\n

(For some reason I wanted to keep the same initial conditions as Fibonacci: f[1] = f[2] = 1.) What would functions like this do? My original notebook records the result in this case:

\n

Nestedly recursive function

\n

But a few minutes later I found something very different: a simple nestedly recursive function with what seemed like highly complex behavior:

\n

Simple nestedly recursive function with complex behavior

\n

I remembered seeing a somewhat similarly defined function discussed before. But the behavior I’d seen reported for that function, while intricate, was nested and ultimately highly regular. And, so far as I could tell, much like with rule 30 and all the other systems I’d investigated, nobody had ever seen serious complexity in simple recursive functions.

\n

It was a nice example. But it was one among many. And when I published A New Kind of Science in 2002, I devoted just four pages (and 7 notes) to “recursive sequences”—even though the gallery I made of their behavior became a favorite page of mine:

\n

Recursive sequences gallery

\n

A year after the book was published we held our first Wolfram Summer School, and as an opening event I decided to do a live computer experiment—in which I would try to make a real-time science discovery. The subject I chose was nestedly recursive functions. It took a few hours. But then, yes, we made a discovery! We found that there was a nestedly recursive function simpler than the ones I’d discussed in A New Kind of Science that already seemed to have very complex behavior:

\n
\n
\n

\n
\n
\n

\n

Over the couple of decades that followed I returned many times to nestedly recursive functions—particularly in explorations I did with high school and other students, or in suggestions I made for student projects. Then recently I used them several times as “intuition-building examples” in various investigations.

\n

I’d always felt my work with nestedly recursive functions was unfinished. Beginning about five years ago—particularly energized by our Physics Project—I started looking at harvesting seeds I’d sown in A New Kind of Science and before. I’ve been on quite a roll, with a few pages or even footnotes repeatedly flowering into rich book-length stories. And finally—particularly after my work last year on “Expression Evaluation and Fundamental Physics”—I decided it was time to try to finish my exploration of nestedly recursive functions.

\n

Our modern Wolfram Language tools—as well as ideas from our Physics Project—provided some new directions to explore. But I still thought I pretty much knew what we’d find. And perhaps after all these years I should have known better. Because somehow in the computational universe—and in the world of ruliology—there are always surprises.

\n

And here, yet again, there was indeed quite a surprise.

\n

The Basic Idea

\n

Consider the definition (later we’ll call this “P312”)

\n
\n
\n

\n

which we can also write as:

\n
\n
\n

\n

The first few values for f[n] generated from this definition are:

\n
\n
\n

\n
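A direct Python transcription of P312 (a sketch of mine; it fills the memo table in increasing n to avoid deep recursion, and uses the initial condition f[n] = 1 for all n ≤ 0 that the text adopts later):

```python
# P312:  f[n] = 3 + f[n - f[n - 2]]  with  f[n] = 1 for n <= 0.
# Filling a memo table in increasing n keeps every needed value available,
# since both arguments f[n - 2] and f[n - f[n - 2]] are below n.

def p312_values(nmax):
    f = {}
    def get(n):
        return 1 if n <= 0 else f[n]
    for n in range(1, nmax + 1):
        f[n] = 3 + get(n - get(n - 2))
    return [f[n] for n in range(1, nmax + 1)]

print(p312_values(10))  # [4, 7, 4, 4, 7, 10, 4, 4, 10, 13]
```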

Continuing further we get:

\n
\n
\n

\n

But how are these values actually computed? To see that we can make an “evaluation graph” in which we show how each value of f[n] is computed from ones with smaller values of n, here starting from f[20]:

\n
\n
\n

\n

The gray nodes represent initial conditions: places where f[n] was sampled for n ≤ 0. The two different colors of edges correspond to the two different computations done in evaluating each f[n]:

\n
\n
\n

\n
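The evaluation graph can be reconstructed as two edge lists, one per computation in the definition (my Python sketch; “red” edges go from n to n – 2 and “blue” edges from n to n – f[n – 2], matching the path-graph and tree structure the text goes on to describe):

```python
# Evaluation-graph edges for P312: each f[n] is computed from f[n-2]
# ("red" edges, forming the odd/even path graphs) and from f[n - f[n-2]]
# ("blue" edges, forming trees rooted at the initial conditions
# f[-3], f[-2], f[-1], f[0]).

def p312_graph(nmax):
    f = {n: 1 for n in range(-3, 1)}  # initial conditions f[n] = 1, n <= 0
    red, blue = [], []
    for n in range(1, nmax + 1):
        inner = n - 2
        outer = n - f[inner]
        f[n] = 3 + f[outer]
        red.append((n, inner))
        blue.append((n, outer))
    return red, blue

red, blue = p312_graph(30)
# Red edges always connect same-parity n's (two path graphs):
print(all((a - b) % 2 == 0 for a, b in red))
# Blue edges bottom out only at the four initial conditions:
print(sorted(set(t for _, t in blue if t <= 0)))
```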

Continuing to f[30] we get:

\n
\n
\n

\n

But what’s the structure of this graph? If we pull out the “red” graph on its own, we can see that it breaks into two path graphs, that consist of the sequences of the f[n] for odd and even n, respectively:

\n
\n
\n

\n

The “blue” graph, on the other hand, breaks into four components—each always a tree—leading respectively to the four different initial conditions:

\n
\n
\n

\n

And for example we can now plot f[n], showing which tree each f[n] ends up being associated with:

\n
\n
\n

\n

We’ll be using this same basic setup throughout, though for different functions. We’ll mostly consider recursive definitions with a single term (i.e. with a single “outermost f”, not two, as in Fibonacci recurrences).

\n

The specific families of recursive functions we’ll be focusing on are:

\n
\n
\n

\n

And with this designation, the function we just introduced is P312.

\n

A Closer Look at P312 ( f[n_] := 3 + f[n – f[n – 2]] )

\n

Let’s start off by looking in more detail at the function we just introduced. Here’s what it does up to n = 500:

\n
\n
\n

\n

It might seem as if it’s going to go on “seemingly randomly” forever. But if we take it further, we get a surprise: it seems to “resolve itself” to something potentially simpler:

\n
\n
\n

\n

What’s going on? Let’s plot this again, but now showing which “blue graph tree” each value is associated with:

\n
\n
\n

\n

And now what we see is that the f[–3] and f[–2] trees stop contributing to f[n] when n is (respectively) 537 and 296, and these trees are finite (and have sizes 53 and 15):

\n
\n
\n

\n

The overall structures of the “remaining” trees—here shown up to f[5000]—eventually start to exhibit some regularity:

\n
\n
\n

\n

We can home in on this regularity by arranging these trees in layers, starting from the root, then plotting the number of nodes in each successive layer:

\n
\n
\n

\n

Looking at these pictures suggests that there should be some kind of more-or-less direct “formula” for f[n], at least for large n. They also suggest that such a formula should have some kind of mod-6 structure. And, yes, there does turn out to be essentially a “formula”. Though the “formula” is quite complicated—and reminiscent of several other “strangely messy” formulas in other ruliological cases—like Turing machine 600720 discussed in A New Kind of Science or combinator s[s[s]][s][s][s][s].

\n

Later on, we’ll see the much simpler recursive function P111 (f[n_] := 1 + f[n – f[n – 1]]). The values for this function form a sequence in which successive blocks of length k have value k:

\n
\n
\n

\n
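A Python transcription of P111 (a sketch assuming the same f[n] = 1 for n ≤ 0 initial condition as P312) confirms the block structure:

```python
# P111:  f[n] = 1 + f[n - f[n - 1]]  with  f[n] = 1 for n <= 0.
# Successive blocks of length k take the value k.

def p111_values(nmax):
    f = {}
    def get(n):
        return 1 if n <= 0 else f[n]
    for n in range(1, nmax + 1):
        f[n] = 1 + get(n - get(n - 1))
    return [f[n] for n in range(1, nmax + 1)]

print(p111_values(9))  # [2, 2, 3, 3, 3, 4, 4, 4, 4]
```

Since the block of value k has length k, position n falls at value roughly √(2n), which is the asymptotic growth of this function.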

P312 has the same kind of structure, but much embellished. First, it has 6 separate riffled (“mod”) subsequences. Each subsequence then consists of a sequence of blocks. Given a value n, this computes which subsequence this is on, which block for that subsequence it’s in, and where it is within that block:

\n
\n
\n

\n

So, for example, here are results for multiples of 1000:

\n
\n
\n

\n

For n = 1000 we’re not yet in the “simple” regime, so we can’t describe the sequence in any simple way, and our “indices” calculation is meaningless. For n = 2000 it so happens that we are at block 0 for the mod-1 subsequence. And the way things are set up, we just start by giving exactly the form of block 0 for each mod. So for mod 1 the block is:

\n
\n
\n

\n

But now n = 2000 has offset 16 within this block, so the final value of f[2000] is simply the 16th value from this list, or 100. f[2001] is then simply the next element within this block, or 109. And so on—until we reach the end of the block.

\n

But what if we’re not dealing with block 0? For example, according to the table above, f[3000] is determined by mod-3 block 1. It turns out there’s a straightforward, if messy, way to compute any block b (for mod m):

So now we have a way to compute the value, say of f[3000], effectively just by “evaluating a formula”:

And what’s notable is that this evaluation doesn’t involve any recursion. In other words, at the cost of “messiness” we’ve—somewhat surprisingly—been able to unravel all the recursion in P312 to arrive at a “direct formula” for the value of f[n] for any n.

So what else can we see about the behavior of f[n] for P312? One notable feature is its overall growth rate. For large n, it turns out that f[n] ≈ √(6 n) (as can be seen by substituting this form into the recursive definition and taking a limit):

One thing this means is that our evaluation graph eventually has a roughly conical form:

This can be compared to the very regular cone generated by P111 (which has asymptotic value √(2 n)):

If one just looks at the form of the recursive definition for P312 it’s far from obvious “how far back” it will need to probe, or, in other words, what values of f[n] one will need to specify as initial conditions. As it turns out, though, the only values needed are f[–3], f[–2], f[–1] and f[0].
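This is easy to check numerically: instrument the evaluation to record the most negative index ever consulted (a Python sketch; the article’s own computations are in Wolfram Language):

```python
def p312_min_lookback(nmax):
    """Evaluate P312 (f[n] = 3 + f[n - f[n-2]], f[n] = 1 for n <= 0)
    up to nmax, tracking the most negative index ever consulted."""
    f, seen = {}, [0]
    def get(k):
        if k <= 0:
            seen[0] = min(seen[0], k)
            return 1
        return f[k]
    for n in range(1, nmax + 1):
        f[n] = 3 + get(n - get(n - 2))
    return seen[0]

print(p312_min_lookback(100000))  # → -3
```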

How can one see this? In 3 + f[n – f[n – 2]] it’s only the outer f that can probe “far back” values. But how far it actually goes back depends on how much larger f[n – 2] gets compared to n. Plotting f[n – 2] and n together we have:

And the point is that only for very few values of n does f[n – 2] exceed n—and it’s these values that probe back. Meanwhile, for larger n, there can never be additional “lookbacks”, because f[n] only grows like √n.

So does any P312 recursion always have the same lookback? So far, we’ve considered specifically the initial condition f[n] = 1 for all n ≤ 0. But what if we change the value of f[0]? Here are plots of f[n] for different cases:

And it turns out that with f[0] = z, the lookback goes to –z for z ≥ 3, and to z – 4 for 1 ≤ z ≤ 2.
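A sketch that checks this lookback rule numerically, with f[0] = z and f[n] = 1 for n < 0 (function name hypothetical):

```python
def min_lookback_with_f0(z, nmax=20000):
    """P312 with f[0] = z and f[n] = 1 for n < 0;
    return the most negative index consulted."""
    f, low = {0: z}, [0]
    def get(k):
        if k < 0:
            low[0] = min(low[0], k)
            return 1
        return f[k]
    for n in range(1, nmax + 1):
        f[n] = 3 + get(n - get(n - 2))
    return low[0]

# lookback z - 4 for z = 1, 2; lookback -z for z >= 3:
print([min_lookback_with_f0(z) for z in range(1, 6)])  # → [-3, -2, -3, -4, -5]
```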

(If z ≤ 0 the function f[n] is basically not defined, because the recursion is trying to compute f[n] from f[n], f[n + 1], etc., so never “makes progress”.)

The case f[0] = 2 (i.e. z = 2) is the one that involves the least lookback—and a total of 3 initial values. Here is the evaluation graph in this case:

By comparison, here is the evaluation graph for the case f[0] = 5, involving 6 initial values:

If we plot the value of f[n] as a function of f[0] we get the following:

For n < 3 f[0], f[n] always has simple behavior, and is essentially periodic in n with period 3:

And it turns out that for any specified initial configuration of values, there is always only bounded lookback—with the bound apparently being determined by the largest of the initial values f[n], n ≤ 0.

So what about the behavior of f[n] for large n? Just like in our original f[0] = 1 case, we can construct “blue graph trees” rooted at each of the initial conditions. In the case f[0] = 1 we found that of the 4 trees only two continue to grow as n increases. As we vary f[0], the number of “surviving trees” varies quite erratically:

What if instead of just changing f[0], and keeping all other f[–k] = 1, we set f[n] = s for all n ≤ 0? The result is somewhat surprising:

For s ≥ 2, the behavior turns out to be simple—and similar to the behavior of P111.

So what can P312 be made to do if we change its initial conditions? With f[n] = 2 for n < 0, we see that for small f[0] the behavior remains “tame”, but as f[0] increases it starts showing its typical complexity:

One question to ask is what set of values f[n] takes on. Given that the initial values have certain residues mod 3, all subsequent values must have the same residues. But apart from this constraint, it seems that all values for f[n] are obtained—which is not surprising given that f[n] grows only like √n.

The “P Family”: f[n_] := a + f[n – b f[n – c]]

P312 is just one example of the “P family” of sequences defined by:

Here is the behavior of some other Pabc sequences:

And here are their evaluation graphs:

P312 is the first “seriously complex” example.

P111 (as mentioned earlier) has a particularly simple form

which corresponds to the simple formula:
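The displayed formula is an image in the original; a closed form consistent with the length-k blocks described above is f[n] = Floor[(1 + Sqrt[8 n + 1])/2], which we can verify against the recursion in Python (a reconstruction, not necessarily the article’s exact displayed expression):

```python
from math import isqrt

def p111_values(nmax):
    """P111: f[n] = 1 + f[n - f[n - 1]], with f[n] = 1 for n <= 0."""
    f = {}
    get = lambda k: 1 if k <= 0 else f[k]
    for n in range(1, nmax + 1):
        f[n] = 1 + get(n - get(n - 1))
    return f

closed = lambda n: (1 + isqrt(8 * n + 1)) // 2   # exact integer square root
vals = p111_values(5000)
assert all(vals[n] == closed(n) for n in range(1, 5001))
```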

The evaluation graph in this case is just:

Only a single initial condition f[0] = 1 is used, and there is only a single “blue graph tree” with a simple form:

Another interesting case is P123:

Picking out only odd values of n we get:

This might look just like the behavior of P111. But it’s not. The lengths of the successive “plateaus” are now

with differences:

But this turns out to be exactly a nested sequence generated by joining together the successive steps in the evolution of the substitution system:

P123 immediately “gets into its final behavior”, even for small n. But—as we saw rather dramatically with P312—there can be “transient behavior” that doesn’t “resolve” until n is large. A smaller case of this phenomenon occurs with P213. Above n = 68 it shows a simple “square root” pattern of behavior, basically like P111. But for smaller n it’s a bit more complicated:

And in this case the transients aren’t due to “blue graph trees” that stop growing. Instead, there are only two trees (associated with f[0] and f[–1]), but both of them soon end up growing in very regular ways:

The “T Family”: f[n_] := a f[n – b f[n – c]]

What happens if our outermost operation is not addition, but multiplication?

Here are some examples of the behavior one gets. In each case we’re plotting on a log scale—and we’re not including T1xx cases, which are always trivial:

We see that some sequences have regular and readily predictable behavior, but others do not. And this is reflected in the evaluation graphs for these functions:

The first “complicated case” is T212:
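Writing out the b = 1 factor from the family definition, T212 is f[n_] := 2 f[n – 1 f[n – 2]], i.e. f[n_] := 2 f[n – f[n – 2]], with f[n] = 1 for n ≤ 0. A Python sketch of the same computation:

```python
def t212_values(nmax):
    """T212: f[n] = 2 f[n - f[n - 2]], with f[n] = 1 for n <= 0."""
    f = {}
    get = lambda k: 1 if k <= 0 else f[k]
    for n in range(1, nmax + 1):
        f[n] = 2 * get(n - get(n - 2))
    return f

vals = t212_values(10)
print([vals[n] for n in range(1, 11)])  # → [2, 4, 4, 2, 4, 4, 8, 4, 4, 8]
```

Every value is a doubling of an earlier value, so all values are powers of 2.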

The evaluation graph for f[50] in this case has the form:

And something that’s immediately notable is that in addition to “looking back” to the values of f[0] and f[–1], this also looks back to the value of f[–24]. Meanwhile, the evaluation graph for f[51] looks back not only to f[0] and f[–1] but also to f[–3] and f[–27]:

How far back does it look in general? Here’s a plot showing which lookbacks are made as a function of n (with the roots of the “blue graph trees” highlighted):

There’s alternation between behaviors for even and odd n. But apart from that, additional lookbacks are just steadily added as n increases—and indeed the total number of lookbacks seems to follow a simple pattern:

But—just for once—if one looks in more detail, it’s not so simple. The lengths of the successive “blocks” are:

So, yes, the lookbacks are quite “unpredictable”. But the main point here is that—unlike for the P family—the number of lookbacks isn’t limited. In a sense, to compute T212 for progressively larger n, progressively more information about its initial conditions is needed.

When one deals with ordinary, unnested recurrence relations, one’s always dealing with a fixed lookback. And the number of initial conditions then just depends on the lookback. (So, for example, the Fibonacci recurrence has lookback 2, so needs two initial conditions, while the standard factorial recurrence has lookback 1, so needs only one initial condition.)

But for the nested recurrence relation T212 we see that this is no longer true; there can be an unboundedly large lookback.

OK, but let’s look back at the actual T212 sequence. Here it is up to larger values of n:

Or, plotting each point as a dot:

Given the recursive definition of f[n], the values of f[n] must always be powers of 2. This shows where each successive power of 2 is first reached as a function of n:

Meanwhile, this shows the accumulated average of f[n] as a function of n:

This is well fit by 0.38 Log[n], implying that, at least with this averaging, f[n] asymptotically approximates n^0.26. And, yes, it is somewhat surprising that what seems like a very “exponential” recursive definition should lead to an f[n] that increases only like a power. But, needless to say, this is the kind of surprise one has to expect in the computational universe.

It’s worth noticing that f[n] fluctuates very intensely as a function of n. The overall distribution of values is very close to exponentially distributed—for example with the distribution of logarithmic values of f[n] for n between 9 million and 10 million being:

What else can we say about this sequence? Let’s say we reduce mod 2 the exponents of the powers of 2 for each f[n]. Then we get a sequence which starts:

This is definitely not “uniformly random”. But if one looks at blocks of sequential values, one can plot at what n each of the 2^b possible configurations of a length-b block first appears:

And eventually it seems as if all length-b blocks for any given b will appear.

By the way, whereas in the P family, there were always a limited number of “blue graph trees” (associated with the limited number of initial conditions), for T212 the number of such trees increases with n, as more initial conditions are used. So, for example, here are the trees for f[50] and f[51]:

We’ve so far discussed T212 only with the initial condition f[n] = 1 for n ≤ 0. The fact that f[n] is always a power of 2 relies on every initial value also being a power of 2. But here’s what happens, for example, if f[n] = 2^s for n ≤ 0:

In general, one can think of T212 as transforming an ultimately infinite sequence of initial conditions into an infinite sequence of function values, with different forms of initial conditions potentially giving very different sequences of function values:

(Note that not all choices of initial conditions are possible; some lead to “f[n] = f[n]” or “f[n] = f[n + 1]” situations, where the evaluation of the function can’t “make progress”.)

The “Summer School” Sequence T311 (f[n_] := 3 f[n – f[n – 1]])

Having explored T212, let’s now look at T311—the original one-term nestedly recursive function discovered at the 2003 Wolfram Summer School:
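A Python sketch of this definition (with f[n] = 1 for n ≤ 0, as elsewhere):

```python
def t311_values(nmax):
    """T311: f[n] = 3 f[n - f[n - 1]], with f[n] = 1 for n <= 0."""
    f = {}
    get = lambda k: 1 if k <= 0 else f[k]
    for n in range(1, nmax + 1):
        f[n] = 3 * get(n - get(n - 1))
    return f

vals = t311_values(10)
print([vals[n] for n in range(1, 11)])  # → [3, 3, 3, 9, 3, 9, 3, 9, 3, 9]
```

By the same argument as for T212, every value here is a power of 3.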

Here’s its basic behavior:

And here is its evaluation graph—which immediately reveals a lot more lookback than T212:

Plotting lookbacks as a function of n we get:

Much as with T212, the total number of lookbacks varies with n in a fairly simple way (~0.44 n):

Continuing the T311 sequence further, it looks qualitatively very much like T212:

And indeed T311—despite its larger number of lookbacks—seems to basically behave like T212. In a story typical of the Principle of Computational Equivalence, T212 seems to have already “filled out the computational possibilities”, so T311 “doesn’t have anything to add”.

The “S Family”: f[n_] := n – f[f[n – a] – b]

As another (somewhat historically motivated) example of nestedly recursive functions, consider what we’ll call the “S family”, defined by:

Let’s start with the very minimal case S10 (or “S1”):

Our standard initial condition f[n] = 1 for n ≤ 0 doesn’t work here, because it implies that f[1] = 1 – f[1]. But if we take f[n] = 1 for n ≤ 1 we get:

Meanwhile, with f[n] = 1 for n ≤ 3 we get:

The first obvious feature of both these results is their overall slope: 1/ϕ ≈ 0.618, where ϕ is the golden ratio. It’s not too hard to see why one gets this slope. Assume that for large n we can take f[n] = σ n. Then substitute this form into both sides of the recursive definition for the S family to get σ n == n – σ (σ (n – a) – b). For large n all that survives is the condition for the coefficients of n

which has solution σ = 1/ϕ.

Plotting f[n] – n/ϕ for the case f[n] = 1 for n ≤ 1 we get:

The evaluation graph in this case has a fairly simple form

as we can see even more clearly with a different graph layout:

It’s notable that only the initial condition f[1] = 1 is used—leading to a single “blue graph tree” that turns out to have a very simple “Fibonacci tree” form (which, as we’ll discuss below, has been known since the 1970s):

From this it follows that f[n] is related to the “Fibonacci-like” substitution system

and in fact the sequence of values of f[n] can be computed just as:

And indeed it turns out that in this case f[n] is given exactly by:
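The exact formula appears as an image in the original. The sequence generated here is the classic Hofstadter G sequence, for which the known closed form is f[n] = Floor[(n + 1)/ϕ]; a Python check (the closed form is a standard result, though not necessarily the article’s displayed expression):

```python
import math

PHI = (1 + math.sqrt(5)) / 2   # the golden ratio

def s1_values(nmax):
    """S1: f[n] = n - f[f[n - 1]], with f[n] = 1 for n <= 1."""
    f = {}
    get = lambda k: 1 if k <= 1 else f[k]
    for n in range(2, nmax + 1):
        f[n] = n - get(get(n - 1))
    return f

vals = s1_values(3000)
assert all(vals[n] == math.floor((n + 1) / PHI) for n in range(2, 3001))
```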

What about when f[n] = 1 not just for n ≤ 1 but beyond? For n ≤ 2 the results are essentially the same as for n ≤ 1. But for n ≤ 3 there’s a surprise: the behavior is considerably more complicated—as we can see if we plot f[n] – n/ϕ:

Looking at the evaluation graph in this case we see that the only initial conditions sampled are f[1] = 1 and f[3] = 1 (with f[2] only being reached if one specifically starts with f[2]):

And continuing the evaluation graph we see a mixture of irregularity and comparative regularity:

The plot of f[n] has a strange “hand-drawn” appearance, with overall regularity but detailed apparent randomness. The most obvious large-scale feature is “bursting” behavior (interspersed in an audio rendering with an annoying hum). The bursts all seem to have approximately (though not exactly) the same structure—and get systematically larger. The lengths of successive “regions of calm” between bursts (characterized by runs with Abs[f[n] – n/ϕ] < 3) seem to consistently increase by a factor ϕ:

What happens to S1 with other initial conditions? Here are a few examples:

So how does Sa depend on a? Sometimes there’s at least a certain amount of clear regularity; sometimes it’s more complicated:

As is very common, adding the parameter b in the definition doesn’t seem to lead to fundamentally new behavior—though for b > 0 the initial condition f[n] = 1, n ≤ 0 can be used:

In all cases, only a limited number of initial conditions are sampled (bounded by the value of a + b in the original definition). But as we can see, the behavior can either be quite simple, or can be highly complex.

More Complicated Rules

Highly complex behavior arises even from very simple rules. It’s a phenomenon one sees all over the computational universe. And we’re seeing it here in nestedly recursive functions. But if we make the rules (i.e. definitions) for our functions more complicated, will we see fundamentally different behavior, or just more of the same?

The Principle of Computational Equivalence (as well as many empirical observations of other systems) suggests that it’ll be “more of the same”: that once one’s passed a fairly low threshold the computational sophistication—and complexity—of behavior will no longer change.

And indeed this is what one sees in nestedly recursive functions. But below the threshold different kinds of things can happen with different kinds of rules.

There are several directions in which we can make rules more complicated. One that we won’t discuss here is to use operations (conditional, bitwise, etc.) that go beyond arithmetic. Others tend to involve adding more instances of f in our definitions.

An obvious way to do this is to take f[n_] to be given by a sum of terms, “Fibonacci style”. There are various specific forms one can consider. As a first example—that we can call ab—let’s look at:

The value of a doesn’t seem to matter much. But changing b we see:

12 has unbounded lookback (at least starting with f[n] = 1 for n ≤ 0), but for larger b, 1b has bounded lookback. In both 13 and 15 there is continuing large-scale structure (here visible in log plots)

though this does not seem to be reflected in the corresponding evaluation graphs:

As another level of Fibonacci-style definition, we can consider ab:

But the typical behavior here does not seem much different from what we already saw with one-term definitions involving only two f’s:

(Note that aa is equivalent to a. Cases like 13 lead after a transient to pure exponential growth.)

A somewhat more unusual case is what we can call abc:

Subtracting overall linear trends we get:

For 111 using initial conditions f[1] = f[2] = 1 and plotting f[n] – n/2 we get

which has a nested structure that is closely related to the result of concatenating binary digit sequences of successive integers:

But despite the regularity in the sequence of values, the evaluation graph for this function is not particularly simple:

So how else might we come up with more complicated rules? One possibility is that instead of “adding f’s by adding terms” we can add f’s by additional nesting. So, for example, we can consider what we can call S31 (here shown with initial condition f[n] = 1 for n ≤ 3):

We can estimate the overall slope here by solving for x in x == 1 – x^3 to get ≈ 0.682. Subtracting this off we get:

We can also consider deeper nestings. At depth d the slope is the solution to x == 1 – x^d. Somewhat remarkably, in all cases the only initial conditions probed are f[1] = 1 and f[3] = 1:

As another example of “higher nesting” we can consider the class of functions (that we call a):

Subtracting a constant 1/ϕ slope we get:

The evaluation graph for 1 is complicated, but has some definite structure:

What happens if we nest even more deeply, say defining a functions:

With depth-d nesting, we can estimate the overall slope of f[n] by solving for x in

or

so that for the d = 3 case here the overall slope is the real root of x + x^2 + x^3 == 1, or about 0.544. Subtracting out this overall slope we get:

And, yes, the sine-curve-like form of 5 is very odd. Continuing 10x longer, though, things are “squaring off”:

What happens if we continue nesting deeper? stays fairly tame:

However, already allows for more complicated behavior:

And for different values of a there are different regularities:

There are all sorts of other extensions and generalizations one might consider. Some involve alternate functional forms; others involve introducing additional functions, or allowing multiple arguments to our function f.

An Aside: The Continuous Case

In talking about recursive functions f[n] we’ve been assuming—as one normally does—that n is always an integer. But can we generalize what we’re doing to functions f[x] where x is a continuous real number?

Consider for example a continuous analog of the Fibonacci recurrence:

This produces a staircase-like function whose steps correspond to the usual Fibonacci numbers:
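A sketch of such a continuous recurrence in Python, assuming the initial condition f[x] = 1 on x < 1 (the precise interval used in the original figure is not recoverable here):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(x):
    """Continuous Fibonacci analog: f[x] = f[x - 1] + f[x - 2] for x >= 1,
    with f[x] = 1 on the (assumed) initial interval x < 1."""
    return 1 if x < 1 else f(x - 1) + f(x - 2)

print([f(k + 0.5) for k in range(6)])  # → [1, 2, 3, 5, 8, 13]
```

Since the initial condition is constant on a unit interval, f[x] is constant on each subsequent unit interval, and the “staircase” steps through successive Fibonacci numbers.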

Adjusting the initial condition produces a slightly different result:

We can think of these as being solutions to a kind of “Fibonacci delay equation”—where we’ve given initial conditions not at discrete points, but instead on an interval.

So what happens with nestedly recursive functions? We can define an analog of S1 as:

Plotting this along with the discrete result we get:

In more detail, we get

where now the plateaus occur at the “Wythoff numbers” Floor[k ϕ].

Changing the initial condition to be x ≤ 3 we get:

Removing the overall slope by subtracting x/ϕ gives:

One feature of the continuous case is that one can continuously change initial conditions—though the behavior one gets typically breaks into “domains” with discontinuous boundaries, as in this case where we’re plotting the value of f[x] as a function of x and of the “cutoff” in the initial conditions:

So what about other rules? A rule like P312 (f[n_] := 3 + f[n – f[n – 2]]) given “constant” initial conditions effectively just copies and translates the initial interval, and gives a simple order-0 interpolation of the discrete case. With initial condition f[x] = x some segments get “tipped”:

All the cases we’ve considered here don’t “look back” to negative values, in either the discrete or continuous case. But what about a rule like T212 (f[n_] := 2 f[n – 1 f[n – 2]]) that progressively “looks back further”? With the initial condition f[x] = 1 for x ≤ 0, one gets the same result as in the discrete case:

But if one uses the initial condition f[x] = Abs[x – 1] for x ≤ 0 (the Abs[x – 1] is needed to avoid ending up with f[x] depending on f[y] for y > x) one instead has

yielding the rather different result:

Continuing for larger x (on a log scale) we get:

Successively zooming in on one of the first “regions of noise” we see that it ultimately consists just of a large number of straight segments:

What’s going on here? If we count the number of initial conditions that are used for different values of x we see that this has discontinuous changes, leading to disjoint segments in f[x]:

Plotting over a larger range of x values the number of initial conditions used is:

And plotting the actual values of those initial conditions we get:

If we go to later, “more intense” regions of noise, we see more fragmentation—and presumably in the limit x → ∞ we get the analog of an essential singularity in f[x]:

For the S family, with its overall n/ϕ trend, even constant initial conditions—say for S1—already lead to tipping, here shown compared to the discrete case:

How Do You Actually Compute Recursive Functions?

Let’s say we have a recursive definition—like the standard Fibonacci one:

How do we actually use this to compute the value of, say, f[7]? Well, we can start from f[7], then use the definition to write this as f[6] + f[5], then write f[6] as f[5] + f[4], and so on. And we can represent this using an evaluation graph, in the form:

But this computation is in a sense very wasteful; for example, it’s independently computing f[3] five separate times (and of course getting the same answer each time). But what if we just stored each f[n] as soon as we computed it, and then retrieved that stored (“cached”) value whenever we needed it again?
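One can make the difference concrete by counting evaluations; a Python sketch (the article’s own examples are in Wolfram Language):

```python
def fib_evaluations(use_cache):
    """Number of f-evaluations needed for Fibonacci f[7] (f[1] = f[2] = 1),
    without and with caching of already-computed values."""
    evaluations = [0]
    cache = {}
    def f(n):
        if use_cache and n in cache:
            return cache[n]          # retrieved, not re-evaluated
        evaluations[0] += 1          # one genuine evaluation of f[n]
        v = 1 if n <= 2 else f(n - 1) + f(n - 2)
        cache[n] = v
        return v
    f(7)
    return evaluations[0]

print(fib_evaluations(False), fib_evaluations(True))  # → 25 7
```

The 25 naive evaluations are exactly the nodes of the unrolled tree; the 7 cached ones are the nodes of the minimal graph.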

In the Wolfram Language, it’s a very simple change to our original definition:

And now our evaluation graph becomes much simpler:

And indeed it’s this kind of minimal evaluation graph that we’ve been using in everything we’ve discussed so far.

What’s the relationship between the “tree” evaluation graph, and this minimal one? The tree graph is basically an “unrolled” version of the minimal graph, in which all the possible paths that can be taken from the root node to the initial condition nodes have been treed out.

In general, the number of edges that come out of a single node in an evaluation graph will be equal to the number of instances of the function f that appear on the right-hand side of the recursive definition we’re using (i.e. 2 in the case of the standard Fibonacci definition). So this means that if the maximum length of path from the root to the initial conditions is s, the maximum number of nodes that can appear in the “unrolled” graph is 2^s. And whenever there are a fixed set of initial conditions (i.e. if there’s always the same lookback), the maximum path length is essentially n—implying in the end that the maximum possible number of nodes in the unrolled graph will be 2^n.

(In the actual case of the Fibonacci recurrence, the number of nodes in the unrolled graph grows like ϕ^n, or about 1.6^n.)

But if we actually evaluate f[7]—say in the Wolfram Language—what is the sequence of f[n]’s that we’ll end up computing? Or, in effect, how will the evaluation graph be traversed? Here are the results for the unrolled and minimal evaluation graphs—i.e. without and with caching:

Particularly in the first case this isn’t the only conceivable result we could have gotten. It’s the way it is here because of the particular “leftmost innermost” evaluation order that the Wolfram Language uses by default. In effect, we’re traversing the graph in a depth-first way. In principle we could use other traversal orders, leading to f[n]’s being evaluated in different orders. But unless we allow other operations (like f[3] + f[3] → 2 f[3]) to be interspersed with f evaluations, we’ll still always end up with the same number of f evaluations for a given evaluation graph.

But which is the “correct” evaluation graph? The unrolled one? Or the minimal one? Well, it depends on the computational primitives we’re prepared to use. With a pure stack machine, the unrolled graph is the only one possible. But if we allow (random-access) memory, then the minimal graph becomes possible.

OK, so what happens with nestedly recursive functions? Here, for example, are unrolled and minimal graphs for T212:

Here are the sequences of f[n]’s that are computed:

And here’s a comparison of the number of nodes (i.e. f evaluations) from unrolled and minimal evaluation graphs (roughly 1.2^n and 0.5 n, respectively):

Different recursive functions lead to different patterns of behavior. The differences are less obvious in evaluation graphs, but can be quite obvious in the actual sequence of f[n]’s that are evaluated:

But although looking at evaluation sequences from unrolled evaluation graphs can be helpful as a way of classifying behavior, the exponentially greater number of steps involved in the unrolled graph typically makes this impractical.

Primitive Recursive or Not?

Recursive functions have a fairly long history, which we’ll discuss below. And for nearly a hundred years there’s been a distinction made between “primitive recursive functions” and “general recursive functions”. Primitive recursive functions are basically ones where there’s a “known-in-advance” pattern of computation that has to be done; general recursive functions are ones that may in effect require one to “search arbitrarily far” to get what one needs.

In Wolfram Language terms, primitive recursive functions are roughly ones that can be constructed directly using functions like Nest and Fold (perhaps nested); general recursive functions can also involve functions like NestWhile and FoldWhile.

So, for example, with the Fibonacci definition

the function f[n] is primitive recursive and can be written, say, as:

Lots of the functions one encounters in practice are similarly primitive recursive—including most “typical mathematical functions” (Plus, Power, GCD, Prime, …). And for example functions that give the results of n steps in the evolution of a Turing machine, cellular automaton, etc. are also primitive recursive. But functions that for example test whether a Turing machine will ever halt (or give the state that it achieves if and when it does halt) are not in general primitive recursive.

On the face of it, our nestedly recursive functions seem like they must be primitive recursive, since they don’t for example appear to be “searching for anything”. But things like the presence of longer and longer lookbacks raise questions. And then there’s the potential confusion of the very first example (dating from the late 1920s) of a recursively defined function known not to be primitive recursive: the Ackermann function.

The Ackermann function has three (or sometimes two) arguments—and, notably, its definition (here given in its classic form) includes nested recursion:

This is what the evaluation graphs look like for some small cases:

Looking at these graphs we can begin to see a pattern. And in fact there’s a simple interpretation: f[m, x, y] for successive m is doing progressively more nested iterations of integer successor operations. f[0, x, y] computes x + y; f[1, x, y] does “repeated addition”, i.e. computes x × y; f[2, x, y] does “repeated multiplication”, i.e. computes x^y; f[3, x, y] does “tetration”, i.e. computes the “power tower” Nest[x^# &, 1, y]; etc.
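One standard formulation consistent with this interpretation is the following (the article’s displayed definition is an image; the base cases here are assumptions chosen to match the behavior just described):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ack(m, x, y):
    """Three-argument Ackermann-style function: level m iterates level m - 1."""
    if m == 0:
        return x + y                   # addition
    if y == 0:
        return 0 if m == 1 else 1      # identity element for *, ^, tower
    return ack(m - 1, x, ack(m, x, y - 1))

print(ack(1, 3, 4), ack(2, 2, 5), ack(3, 2, 3))  # → 12 32 16
```

So ack(1, x, y) gives x × y, ack(2, x, y) gives x^y, and ack(3, 2, 3) is the power tower Nest[2^# &, 1, 3], i.e. 16.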

Or, alternatively, these can be given explicitly in successively more nested form:

And at least in this form f[m, x, y] involves m nestings. But a given primitive recursive function can involve only a fixed number of nestings. It might be conceivable that we could rewrite f[m, x, y] in certain cases to involve only a fixed number of nestings. But if we look at f[m, m, m] then this turns out to inevitably grow too rapidly to be represented by a fixed number of nestings—and thus cannot be primitive recursive.

But it turns out that the fact that this can happen depends critically on the Ackermann function having more than one argument—so that one can construct the “diagonal” f[m, m, m].

So what about our nestedly recursive functions? Well, at least in the form that we’ve used them, they can all be written in terms of Fold. The key idea is to accumulate a list of values so far (conveniently represented as an association)—sampling whichever parts are needed—and then at the end take the last element. So for example the “Summer School function” T311

can be written:

An important feature here is that we’re getting Lookup to give 1 if the value it’s trying to look up hasn’t been filled in yet, implementing the fact that f[n] = 1 for n ≤ 0.

So, yes, our recursive definition might look back further and further. But it always just finds value 1—which is easy for us to represent without, for example, any extra nesting, etc.

The ultimate (historical) definition of primitive recursion, though, doesn’t involve subsets of the Wolfram Language (the definition was given almost exactly 100 years too early!). Instead, it involves a specific set of simple primitives:

(An alternative, equivalent definition for recursion—explicitly involving Fold—is r[g_, h_] := Fold[{u, v} |-> h[u, v, ##2], g[##2], Range[0, #1 – 1]] &.)

So can our nestedly recursive functions be written purely in terms of these primitives? The answer is yes, though it’s seriously complicated. A simple function like Plus can for example be written as r[p[1], s], so that e.g. r[p[1], s][2, 3] gives 5. Times can be written as r[z, c[Plus, p[1], p[3]]] or r[z, c[r[p[1], s], p[1], p[3]]], while Factorial can be written as r[c[s, z], c[Times, p[1], c[s, p[2]]]]. But even Fibonacci, for example, seems to require a very much longer specification.
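One consistent Python rendering of these primitives (the argument convention for h in r—accumulated value, counter, then parameters—is an assumption chosen to make the examples in the text come out right):

```python
def z(*args):            # zero function
    return 0

def s(x, *rest):         # successor (ignores any extra arguments)
    return x + 1

def p(i):                # projection onto the i-th argument (1-indexed)
    return lambda *args: args[i - 1]

def c(f, *gs):           # composition: f applied to the results of the gs
    return lambda *args: f(*(g(*args) for g in gs))

def r(g, h):
    """Primitive recursion: F(0, rest) = g(rest);
    F(n, rest) = h(F(n - 1, rest), n - 1, rest)."""
    def F(n, *rest):
        acc = g(*rest)
        for k in range(n):
            acc = h(acc, k, *rest)
        return acc
    return F

plus = r(p(1), s)
times = r(z, c(plus, p(1), p(3)))
fact = r(c(s, z), c(times, p(1), c(s, p(2))))
print(plus(2, 3), times(2, 3), fact(4))  # → 5 6 24
```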

In writing “primitive-recursive-style” definitions in Wolfram Language we accumulated values in lists and associations. But in the ultimate definition of primitive recursion, there are no such constructs; the only form of “data” is positive integers. But for our definitions of nestedly recursive functions we can use a “tupling function” that “packages up” any list of integer values into a single integer (and an untupling function that unpacks it). And we can do this say based on a pairing (2-element-tupling) function like:
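The article’s specific pairing function is shown as an image; one standard choice with the required properties is the Cantor pairing function:

```python
from math import isqrt

def pair(a, b):
    """Cantor pairing: a bijection between pairs of naturals and naturals."""
    return (a + b) * (a + b + 1) // 2 + b

def unpair(n):
    """Invert the Cantor pairing."""
    w = (isqrt(8 * n + 1) - 1) // 2   # largest w with w(w+1)/2 <= n
    b = n - w * (w + 1) // 2
    return w - b, b

assert all(unpair(pair(a, b)) == (a, b) for a in range(50) for b in range(50))
```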

\n
\n
\n

\n
\n
\n

\n

But what about the actual If[n ≤ 0, 1, ...] lookback test itself? Well, If can be written in primitive recursive form too: for example, r[c[s, z], c[f, c[s, p[2]]]][n] is equivalent to If[n ≤ 0, 1, f[n]].

\n

So our nestedly recursive functions as we’re using them are indeed primitive recursive. Or, more strictly, finding values f[n] is primitive recursive. Asking questions like “For what n does f[n] reach 1000?” might not be primitive recursive. (The obvious way of answering them involves a FoldWhile-style non-primitive-recursive search, but proving that there’s no primitive recursive way to answer the question is likely very much harder.)

\n

By the way, it’s worth commenting that while for primitive recursive functions it’s always possible to compute a value f[n] for any n, that’s not necessarily true for general recursive functions. For example, if we ask “For what n does f[n] reach 1000?” there might simply be no answer to this; f[n] might never reach 1000. And when we look at the computations going on underneath, the key distinction is that in evaluating primitive recursive functions, the computations always halt, while for general recursive functions, they may not.

\n

So, OK. Our nestedly recursive functions can be represented in “official primitive recursive form”, but they’re very complicated in that form. So that raises the question: what functions can be represented simply in this form? In A New Kind of Science I gave some examples, each minimal for the output it produces:

\n
\n
\n

\n

And then there’s the most interesting function I found:

\n
\n
\n

\n

It’s the simplest primitive recursive function whose output has no obvious regularity:

\n
\n
\n

\n
\n
\n

\n
\n
\n

\n

Because it’s primitive recursive, it’s possible to express it in terms of functions like Fold—though it’s two deep in those, making it in some ways more complicated (at least as far as the Grzegorczyk hierarchy that counts “Fold levels” is concerned) than our nestedly recursive functions:

\n
\n
\n

\n

But there’s still an issue to address with nestedly recursive functions and primitive recursion. When we have functions (like T212) that “reach back” progressively further as n increases, there’s a question of what they’ll find. We’ve simply assumed f[n] = 1 for n ≤ 0. But what if there were something more complicated there? Even if f[–m] was given by some primitive recursive function, say p[m], it seems possible that in computing f[n] one could end up somehow “bouncing back and forth” between positive and negative arguments, and in effect searching for an m for which p[m] has some particular value, and in doing that searching one could find oneself outside the domain of primitive recursive functions.

\n

And this raises yet another question: are all definitions we can give of nestedly recursive functions consistent? Consider for example:

\n
\n
\n

\n

Now ask: what is f[1]? We apply the recursive definition. But it gives us f[1] = 1 – f[f[0]] or f[1] = 1 – f[1], or, in other words, an inconsistency. There are many such inconsistencies that seem to “happen instantly” when we apply definitions. But it seems conceivable that there could be “insidious inconsistencies” that show up only after many applications of a recursive definition. And it’s also conceivable that one could end up with “loops” like f[i] = f[i]. And things like this could be reasons that f[n] might not be a “total function”, defined for all n.
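A mechanical evaluator can flag this kind of immediate inconsistency. Here is a Python sketch for the definition f[n] = 1 − f[f[n − 1]] (assuming f[n] = 1 for n ≤ 0, as throughout), which tracks the values currently being computed and raises if a value turns out to depend on itself:

```python
def eval_f(n, memo=None, active=None):
    # f[n] = 1 for n <= 0; f[n] = 1 - f[f[n-1]] otherwise
    if memo is None:
        memo, active = {}, set()
    if n <= 0:
        return 1
    if n in memo:
        return memo[n]
    if n in active:
        # f[n] has turned out to depend on f[n] itself
        raise ValueError(f"inconsistent definition: f[{n}] depends on itself")
    active.add(n)
    inner = eval_f(n - 1, memo, active)
    memo[n] = 1 - eval_f(inner, memo, active)
    active.discard(n)
    return memo[n]
```

Evaluating eval_f(1) immediately hits the f[1] = 1 − f[1] contradiction described above; a "loop" like f[i] = f[i] would be caught the same way.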

\n

We’ve seen all sorts of complex behavior in nestedly recursive functions. And what the Principle of Computational Equivalence suggests is that whenever one sees complex behavior, one must in some sense be dealing with computations that are “as sophisticated as any computation can be”. And in particular one must be dealing with computations that can somehow support computation universality.

\n

So what would it mean for a nestedly recursive function to be universal? For a start, one would need some way to “program” the function. There seem to be a couple of possibilities. First, one could imagine packing both “code” and “data” into the argument n of f[n]. So, for example, one might use some form of tupling function to take a description of a rule and an initial state for a Turing machine, together with a specification of a step number, then package all these things into an integer n that one feeds into one’s universal nestedly recursive function f. Then the idea would be that the value computed for f[n] could be decoded to give the state of the Turing machine at the specified step. (Such a computation by definition always halts—but much as one computes with Turing machines by successively asking for the next steps in their evolution, one can imagine setting up a “harness” that just keeps asking for values of f[n] at an infinite progression of values n.)

\n

Another possible approach to making a universal nestedly recursive function is to imagine feeding in a “program” through the initial conditions one gives for the function. There might well need to be decoding involved, but in some sense what one might hope is that just by changing its initial conditions one could get a nestedly recursive function with a specific recursive definition to emulate a nestedly recursive function with any other recursive definition (or, say, for a start, any linear recurrence).

\n

Perhaps one could construct a complicated nestedly recursive function that would have this property. But what the Principle of Computational Equivalence suggests is that it should be possible to find the property even in “naturally occurring cases”—like P312 or T212.

\n

The situation is probably going to be quite analogous to what happened with the rule 110 cellular automaton or the s = 2, k = 3 Turing machine 596440. By looking at the actual typical behavior of the system one got some intuition about what was likely to be going on. And then later, with great effort, it became possible to actually prove computation universality.

\n

In the case of nestedly recursive functions, we’ve seen here examples of just how diverse the behavior generated by changing initial conditions can be. It’s not clear how to harness this diversity to extract some form of universality. But it seems likely that the “raw material” is there. And that nestedly recursive functions will show themselves as able to join so many other systems in fitting into the framework defined by the Principle of Computational Equivalence.

\n

Some History

\n

Once one has the concept of functions and the concept of recursion, nestedly recursive functions aren’t in some sense a “complicated idea”. And between this fact and the fact that nestedly recursive functions haven’t historically had a clear place in any major line of mathematical or other development it’s quite difficult to be sure one’s accurately tracing their history. But I’ll describe here at least what I currently know.

\n

The concept of something like recursion is very old. It’s closely related to mathematical induction, which was already being used for proofs by Euclid around 300 BC. And in a quite different vein, around the same time (though not recorded in written form until many centuries later) Fibonacci numbers arose in Indian culture in connection with the enumeration of prosody (“How many different orders are there in which to say the Sanskrit words in this veda?”).

\n

Then in 1202 Leonardo Fibonacci, at the end of his calculational math book Liber Abaci (which was notable for popularizing Hindu-Arabic numerals in the West) stated—more or less as a recreational example—his “rabbit problem” in recursive form, and explicitly listed the Fibonacci numbers up to 377. But despite this early appearance, explicit recursively defined sequences remained largely a curiosity until as late as the latter part of the twentieth century.

\n

The concept of an abstract function began to emerge with calculus in the late 1600s, and became more solidified in the 1700s—but basically always in the context of continuous arguments. A variety of specific examples of recurrence relations—for binomial coefficients, Bernoulli numbers, etc.—were in fairly widespread use. But there didn’t yet seem to be a sense that there was a general mathematical structure to study.

\n

In the course of the 1800s there had been an increasing emphasis on rigor and abstraction in mathematics, leading by the latter part of the century to a serious effort to axiomatize concepts associated with numbers. Starting with concepts like the recursive definition of integers by repeated application of the successor operation, by the time of Peano’s axioms for arithmetic in 1891 there was a clear general notion (particularly related to the induction axiom) that (integer) functions could be defined recursively. And when David Hilbert’s program of axiomatizing mathematics got underway at the beginning of the 1900s, it was generally assumed that all (integer) functions of interest could actually be defined specifically using primitive recursion.

\n

The notation for recursively specifying functions gradually got cleaner, making it easier to explore more elaborate examples. And in 1927 Wilhelm Ackermann (a student of Hilbert’s) introduced (in completely modern notation) a “reasonable mathematical function” that—as we discussed above—he showed was not primitive recursive. And right there, in his paper, without any particular comment, is a nestedly recursive function definition:

\n

Ackermann nestedly recursive function paper

\n

Ackermann nestedly recursive function definition

\n

In 1931 Kurt Gödel further streamlined the representation of recursion, and solidified the notion of general recursion. There soon developed a whole field of recursion theory—though most of it was concerned with general issues, not with specific, concrete recursive functions. A notable exception was the work of Rózsa Péter (Politzer), beginning in the 1930s, and leading in 1957 to her book Recursive Functions—which contains a chapter on “Nested Recursion” (here in English translation):

\n

Nested recursion book chapter

\n

But despite the many specific (mostly primitive) recursive functions discussed in the rest of the book, this chapter doesn’t stray far from the particular function Ackermann defined (or at least Péter’s variant of it).

\n

What about the recreational mathematics literature? By the late 1800s there were all sorts of publications involving numbers, games, etc. that at least implicitly involved recursion (an example being Édouard Lucas’s 1883 Tower of Hanoi puzzle). But—perhaps because problems tended to be stated in words rather than mathematical notation—it doesn’t seem as if nestedly recursive functions ever showed up.

\n

In the theoretical mathematics literature, a handful of somewhat abstract papers about “nested recursion” did appear, an example being one in 1961 by William Tait, then at Stanford:

\n

Nested recursion paper by William Tait

\n

But, meanwhile, the general idea of recursion was slowly beginning to go from purely theoretical to more practical. John McCarthy—who had coined the term “artificial intelligence”—was designing LISP as “the language for AI” and by 1960 was writing papers with titles like “Recursive Functions of Symbolic Expressions and Their Computation by Machine”.

\n

In 1962 McCarthy came to Stanford to found the AI Lab there, bringing with him enthusiasm for both AI and recursive functions. And by 1968 these two topics had come together in an effort to use “AI methods” to prove properties of programs, and in particular programs involving recursive functions. And in doing this, John McCarthy came up with an example he intended to be awkward—that’s exactly a nestedly recursive function:

\n

John McCarthy nestedly recursive function example

\n

In our notation, it would be:

\n
\n
\n

\n

And it became known as “McCarthy’s 91-function” because, yes, for many n, f[n] = 91. These days it’s trivial to evaluate this function—and to find out that f[n] = 91 only up to n = 102:
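The transcription into Python is equally direct:

```python
# McCarthy's 91-function
def f91(n):
    return n - 10 if n > 100 else f91(f91(n + 11))

print([f91(n) for n in (0, 50, 100, 101, 102, 110)])  # → [91, 91, 91, 91, 92, 100]
```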

\n
\n
\n

\n

But even the evaluation graph is somewhat large

\n
\n
\n

\n

and in pure recursive evaluation the recursion stack can get deep—which back then was a struggle for LISP systems to handle.

\n

There were efforts at theoretical analysis, for example by Zohar Manna, who in 1974 published Mathematical Theory of Computation which—in a section entitled “Fixpoints of Functionals”—presents the 91-function and other nestedly recursive functions, particularly in the context of evaluation-order questions.

\n

In the years that followed, a variety of nestedly recursive functions were considered in connection with proving theorems about programs, and with practical assessments of LISP systems, a notable example being Ikuo Takeuchi’s 1978 triple recursive function:

\n

Ikuo Takeuchi triple recursive function example
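The function exists in a couple of closely related variants (the base case returning y in Takeuchi's original "tarai", z in the version McCarthy circulated and LISP benchmarks adopted); here is the benchmark form as a Python sketch:

```python
# Takeuchi's triply recursive function (benchmark variant, whose base
# case returns z; Takeuchi's original "tarai" returns y instead)
def tak(x, y, z):
    if x <= y:
        return z
    return tak(tak(x - 1, y, z),
               tak(y - 1, z, x),
               tak(z - 1, x, y))
```

The classic benchmark call is tak(18, 12, 6): the recursion tree is enormous while the result is a small integer, which is exactly what made it a stress test for LISP systems.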

\n

But in all these cases the focus was on how these functions would be evaluated, not on what their behavior would be (and it was typically very simple).

\n

But now we have to follow another thread in the story. Back in 1961, right on the Stanford campus, a then-16-year-old Douglas Hofstadter was being led towards nestedly recursive functions. As Doug tells it, it all started with him seeing that squares are interspersed with gaps of 1 or 2 between triangular numbers, and then noticing patterns in those gaps (and later realizing that they showed nesting). Meanwhile, at Stanford he had access to a computer running Algol, a language which (like LISP and unlike Fortran) supported recursion (though this wasn’t particularly advertised, since recursion was still generally considered quite obscure).

\n

And as Doug tells it, within a year or two he was using Algol to do things like recursively create trees representing English sentences. Meanwhile—in a kind of imitation of the Eleusis “guess-a-card-rule” game—Doug was apparently challenging his fellow students to a “function game” based on guessing a math function from specified values. And, as he tells it, he found that functions that were defined recursively were the ones people found it hardest to guess.

\n

That was all in the early 1960s, but it wasn’t until the mid-1970s that Doug Hofstadter returned to such pursuits. After various adventures, Doug was back at Stanford—writing what became his book Gödel, Escher, Bach. And in 1977 he sent a letter to Neil Sloane, creator of the 1973 A Handbook of Integer Sequences (and what’s now the Online Encyclopedia of Integer Sequences, or OEIS):

\n

Douglas Hofstadter letter to Neil Sloane

\n

As suggested by the accumulation of “sequence ID” annotations on the letter, Doug’s “eta sequences” had actually been studied in number theory before—in fact, since at least the 1920s (they are now usually called Beatty sequences). But the letter went on, now introducing some related sequences—that had nestedly recursive definitions:

\n

Sequences with nestedly recursive definitions

\n

As Doug pointed out, these particular sequences (which were derived from golden ratio versions of his “eta sequences”) have a very regular form—which we would now call nested. And it was the properties of this form that Doug seemed most concerned about in his letter. But actually, as we saw above, just a small change in initial conditions in what I’m calling S1 would have led to much wilder behavior. But that apparently wasn’t something Doug happened to notice. A bit later in the letter, though, there was another nestedly recursive sequence—that Doug described as a “horse of an entirely nother color”: the “absolutely CRAZY” Q sequence:

\n

Crazy Q sequence

\n

Two years later, Doug’s Gödel, Escher, Bach book was published—and in it, tucked away at the bottom of page 137, a few pages after a discussion of recursive generation of text (with examples such as “the strange bagels that the purple cow without horns gobbled”), there was the Q sequence:

\n

Chaotic Q sequence

\n

Strangely, though, there was no picture of it, and Doug listed only 17 terms (which, until I was writing this, was all I assumed he had computed):

\n

17 Q-sequence terms
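Those 17 terms follow from the Q sequence's standard definition, Q(1) = Q(2) = 1 and Q(n) = Q(n − Q(n − 1)) + Q(n − Q(n − 2)), which a memoized Python sketch reproduces:

```python
from functools import lru_cache

# Hofstadter's Q sequence, memoized so each value is computed only once
@lru_cache(maxsize=None)
def Q(n):
    if n <= 2:
        return 1
    return Q(n - Q(n - 1)) + Q(n - Q(n - 2))

print([Q(n) for n in range(1, 18)])
# → [1, 1, 2, 3, 3, 4, 5, 5, 6, 6, 6, 8, 8, 8, 10, 9, 10]
```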

\n

So now nestedly recursive sequences were out in the open—in what quickly became a very popular book. But I don’t think many people noticed them there (though, as I’ll discuss, I did). Gödel, Escher, Bach is primarily a playful book focused on exposition—and not the kind of place you’d expect to find a new, mathematical-style result.

\n

Still—quite independent of the book—Neil Sloane showed Doug’s 1977 letter to his Bell Labs colleague Ron Graham, who within a year made a small mention of the Q sequence in a staid academic math publication (in a characteristic “state-it-as-a-problem” Erdös form):

\n

Erdös and Graham math paper

\n

Erdös and Graham math paper continued

\n

There was a small and tight-knit circle of serious mathematicians—essentially all of whom, as it happens, I personally knew—who would chase these kinds of easy-to-state-but-hard-to-solve problems. Another was Richard Guy, who soon included the sequence as part of problem E31 in his Unsolved Problems in Number Theory, and mentioned it again a few years later.

\n

But for most of the 1980s little was heard about the sequence. As it later turned out, a senior British applied mathematician named Brian Conolly (who wasn’t part of the aforementioned tight-knit circle) had—presumably as a kind of hobby project—made some progress, and in 1986 had written to Guy about it. Guy apparently misplaced the letter, but later told Conolly that John Conway and Sol Golomb had worked on similar things.

\n

Conway presumably got the idea from Hofstadter’s work (though he had a habit of obfuscating his sources). But in any case, on July 15, 1988, Conway gave a talk at Bell Labs entitled “Some Crazy Sequences” (note the word “crazy”, just like in Hofstadter’s letter to Sloane) in which he discussed the regular-enough-to-be-mathematically-interesting sequence (which we’re calling G3111 here):
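That sequence, now usually called the Hofstadter–Conway $10,000 sequence, is defined by a(1) = a(2) = 1 and a(n) = a(a(n − 1)) + a(n − a(n − 1)); the identification with the G3111 naming above is assumed here. A short iterative Python sketch:

```python
# Conway's challenge sequence a(n) = a(a(n-1)) + a(n - a(n-1)),
# computed iteratively with a 1-indexed list
def conway_sequence(n_max):
    a = [None, 1, 1]  # a[0] unused; a[1] = a[2] = 1
    for n in range(3, n_max + 1):
        a.append(a[a[n - 1]] + a[n - a[n - 1]])
    return a[1:]

print(conway_sequence(16))
# → [1, 1, 2, 2, 3, 4, 4, 4, 5, 6, 7, 7, 8, 8, 8, 8]
```

Conway's prize question concerned how a(n)/n wiggles around 1/2 as n grows; at powers of two the sequence is exact, with a(2^k) = 2^(k−1).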

\n
\n
\n

\n
\n
\n

\n

Despite its visual regularity, Conway couldn’t mathematically prove certain features of the wiggles in the sequence—and in his talk offered a $10,000 prize for anyone who could. By August a Bell Labs mathematician named Colin Mallows had done it. Conway claimed (later to be contradicted by video evidence) that he’d only offered $1000—and somehow the whole affair landed as a story in the August 30 New York Times under the heading “Intellectual Duel: Brash Challenge, Swift Response”. But in any case, this particular nestedly recursive sequence became known as “Conway’s Challenge Sequence”.

\n

So what about Sol Golomb? It turns out he’d started writing a paper—though never finished it:

\n

Discrete Chaos paper

\n

Discrete Chaos paper continued

\n

He’d computed 280 terms of the Q sequence (he wasn’t much of a computer user) and noticed a few coincidences. But he also mentioned another kind of nestedly recursive sequence, no doubt inspired by his work on feedback shift registers:

\n
\n
\n

\n

As he noted, the behavior depends greatly on the initial conditions, though it is always eventually periodic—with his student Unjeng Cheng having found long-period examples.

\n

OK, so by 1988 nestedly recursive functions had at least some notoriety. So what happened next? Not so much. There’s a modest academic literature that’s emerged over the last few decades, mostly concentrated very specifically around “Conway’s Challenge Sequence”, Hofstadter’s Q function, or very similar “meta Fibonacci” generalizations of them. And so far as I know, even the first published large-scale picture of the Q sequence only appeared in 1998 (though I had pictures of it many years earlier):

\n

Klaus Pinn Q-sequence paper

\n

Klaus Pinn Q-sequence paper continued

\n

Why wasn’t more done with nestedly recursive functions? At some level it’s because they tend to have too much computational irreducibility—making it pretty difficult to say much about them in traditional mathematical ways. But perhaps more important, studying them broadly is really a matter of ruliology: it requires the idea of exploring spaces of rules, and of expecting the kinds of behavior and phenomena that are characteristic of systems in the computational universe. And that’s something that’s still not nearly as widely understood as it should be.

\n

My Personal Story with Nestedly Recursive Functions

\n

I think 1979 was the year when I first took recursion seriously. I’d heard about the Fibonacci sequence (though not under that name) as a young child a decade earlier. I’d implicitly (and sometimes explicitly) encountered recursion (sometimes through error messages!) in computer algebra systems I’d used. In science, I’d studied fractals quite extensively (Benoit Mandelbrot’s book having appeared in 1977), and I’d been exposed to things like iterated maps. And I’d quite extensively studied cascade processes, notably of quarks and gluons in QCD.

\n

As I think about it now, I realize that for several years I’d written programs that made use of recursion (and I had quite a lot of exposure to LISP, and the culture around it). But it was in 1979—having just started using C—that I first remember writing a program (for doing percolation theory) where I explicitly thought “this is using recursion”. But then, in late 1979, I began to design SMP (“Symbolic Manipulation Program”), the forerunner of the modern Wolfram Language. And in doing this I quickly solidified my knowledge of mathematical logic and the (then-fledgling) field of theoretical computer science.

\n

My concept of repeated transformations for symbolic expressions—which is still the core of Wolfram Language today—is somehow fundamentally recursive. And by the time we had the first signs of life for our SMP system, Fibonacci was one of our very first tests. We soon tried the Ackermann function too. And in 1980 I became very interested in the problem of evaluation order, particularly for recursive functions—and the best treatment I found of it (though at the time not very useful to me) was in none other than the book by Zohar Manna that I mentioned above. (In a strange twist, I was at that time also studying gauge choices in physics—and it was only last year that I realized that they’re fundamentally the same thing as evaluation orders.)

\n

It was soon after it came out in 1979 that I first saw Douglas Hofstadter’s book. At the time I wasn’t too interested in its Lewis-Carroll-like aspects, or its exposition; I just wanted to know what the “science meat” in it was. And somehow I found the page about the Q sequence, and filed it away as “something interesting”.

\n

I’m not sure when I first implemented the Q sequence in SMP, but by the time we released Version 1.0 in July 1981, there it was: an external package (hence the “X” prefix) for evaluating “Hofstadter’s recursive function”, elegantly using memoization—with the description I gave saying (presumably because that’s what I’d noticed) that its values “have several properties of randomness”:

\n

Hofstadter recursive function

\n

Firing up a copy of SMP today—running on a virtual machine that still thinks it’s 1986—I can run this code, and easily compute the function:

\n

SMP evaluation

\n

I can even plot it—though without an emulator for a 1980s-period storage-tube display, only the ASCIIfied rendering works:

\n

ASCIIfied rendering

\n

So what did I make of the function back in 1981? I was interested in how complexity and randomness could occur in nature. But at the time, I didn’t have enough of a framework to understand the connection. And, as it was, I was just starting to explore cellular automata, which seemed a lot more “nature like”—and which soon led me to things like rule 30 and the phenomenon of computational irreducibility.

\n

Still, I didn’t forget the Q sequence. And when I was building Mathematica I again used it as a test (the .tq file extension came from the brief period in 1987 when we were trying out “Technique” as the name of the system):

\n

Combinatorial functions

\n

Combinatorial functions continued

\n

When Mathematica 1.0 was released on June 23, 1988, the Q sequence appeared again, this time as an example in the soon-in-every-major-bookstore Mathematica book:

\n

Q sequence in Mathematica book

\n

Q sequence in Mathematica book continued

\n

I don’t think I was aware of Conway’s lecture that occurred just 18 days later. And for a couple of years I was consumed with tending to a young product and a young company. But by 1991, I was getting ready to launch into basic science again. Meanwhile, the number theorist (and today horologist) Ilan Vardi—yet again from Stanford—had been working at Wolfram Research and writing a book entitled Computational Recreations in Mathematica, which included a long section on the analysis of Takeuchi’s nested recursive function (“TAK”). My email archive records an exchange I had with him:

\n

Wolfram–Vardi email

\n

He suggested a “more symmetrical” nested recursive function. I responded, including a picture that made it fairly clear that this particular function would have only nested behavior, and not seem “random”:

\n

Wolfram–Vardi followup email

\n

Nested recursive function graphic

\n

By the summer of 1991 I was in the thick of exploring different kinds of systems with simple rules, discovering the remarkable complexity they could produce, and filling out what became Chapter 3 of A New Kind of Science: “The World of Simple Programs”. But then came Chapter 4: “Systems Based on Numbers”. I had known since the mid-1980s about the randomness in things like digit sequences produced by successive arithmetic operations. But what about randomness in pure sequences of integers? I resolved to find out just what it would take to produce randomness there. And so it was that on August 13, 1993, I came to be enumerating possible symbolic forms for recursive functions—and selecting ones that could generate at least 10 terms:

\n

Symbolic forms for recursive functions

\n

As soon as I plotted the “survivors” I could see that interesting things were going to happen:

\n

Recursive function graphs

\n

Was this complexity somehow going to end? I checked it out to 10 million terms. And soon I started collecting my “prize specimens” and making a gallery of them:

\n

Recursive functions gallery

\n

I had some one-term recurrences, and some two-term ones. Somewhat shortsightedly I was always using “Fibonacci-like” initial conditions f[1] = f[2] = 1—and I rejected any recurrence that tried to “look back” to f[0], f[–1], etc. And at the time, with this constraint, I only found “really interesting” behavior in two-term recurrences.

\n

In 1994 I returned briefly to recursive sequences, adding a note “solving” a few of them, and discussing the evaluation graphs of others:

\n

Properties of sequences

\n

Evaluation graphs

\n

When I finally finished A New Kind of Science in 2002, I included a list of historical “Close approaches” to its core discoveries, one of them being Douglas Hofstadter’s work on recursive sequences:

\n

Douglas Hofstadter work on recursive sequences

\n

In retrospect, back in 1981 I should have been able to take the “Q sequence” and recognize in it the essential “rule 30 phenomenon”. But as it was, it took another decade—and many other explorations in the computational universe—for me to build up the right conceptual framework to see this. In A New Kind of Science I studied many kinds of systems, probing them far enough, I hoped, to see their most important features. But recursive functions were an example where I always felt there was more to do; I felt I’d only just scratched the surface.

\n

In June 2003—a year after A New Kind of Science was published—we held our first summer school. And as a way to introduce methodology—and be sure that people knew I was fallible and approachable—I decided on the first day of the summer school to do a “live experiment”, and try to stumble my way to discovering something new, live and in public.

\n

A few minutes before the session started, I picked the subject: recursive functions. I began with some examples I knew. Then it was time to go exploring. At first lots of functions “didn’t work” because they were looking back too far. But then someone piped up “Why not just say that f[n] is 1 whenever n isn’t a positive integer?” Good idea! And very easy to try.

\n

Soon we had the “obvious” functions written (today Apply[Plus, ...] could be just Total[...], but otherwise there’s nothing “out of date” here):

\n
\n
\n

\n

In a typical story of Wolfram-Language-helps-one-think-clearly, the obvious function was also very general, and allowed a recurrence with any number of terms. So why not start with just one term? And immediately, there it was—what we’re now calling T311:

\n

T311

\n

And then a plot (yes, after Version 6 one didn’t need the Show or the trailing “;”):

\n

RSValues plot

\n

Of course, as is the nature of computational constructions, this is something timeless—that looks the same today as it did 21 years ago (well, except that now our plots display with color by default).

\n

I thought this was a pretty neat discovery. And I just couldn’t believe that years earlier I’d failed to see the obvious generalization of having “infinite” initial conditions.

\n

The next week I did a followup session, this time talking about how one would write up a discovery like this. We started off with possible titles (including audience suggestions):

\n

Suggested titles

\n

And, yes, the first title listed is exactly the one I’ve now used here. In the notebook I created back then, there were first some notes (some of which should still be explored!):

\n

Title notes

\n

Three hours later (on the afternoon of July 11, 2003) there’s another notebook, with the beginnings of a writeup:

\n

Initial recursive functions writeup

\n

By the way, another thing we came up with at the summer school was the (non-nestedly) recursive function:

\n
\n
\n

\n

Plotting g[n + 1] – g[n] gives:

\n
\n
\n

\n

And, yes, bizarrely (and reminiscent of McCarthy’s 91-function) for n ≥ 396, g[n + 1] – g[n] is always 97, and g[n] = 38606 + 97 (n – 396).

\n

But in any case, a week or so after my “writeups” session, the summer school was over. In January 2004 I briefly picked the project up again, and made some pictures that, yes, show interesting structure that perhaps I should investigate now:

\n

f[n - f[n - 1]]

\n

In the years that followed, I would occasionally bring nestedly recursive functions out again—particularly in interacting with high school and other students. And at our summer programs I suggested projects involving them to a number of students.

\n

In 2008, they seemed like an “obvious interesting thing” to add to our Demonstrations Project:

\n

NKS summer school live experiment

\n

But mostly, they languished. Until, that is, my burst of “finish this” intellectual energy that followed the launch of our Physics Project in 2020. So here now, finally, after a journey of 43 years, I feel like I’ve been able to do some justice to nestedly recursive functions, and to provide a bit more illumination to yet another corner of the computational universe.

\n

(Needless to say, there are many, many additional questions and issues to explore. Different primitives, e.g. Mod, Floor, etc. Multiple functions that refer to each other. Multiway cases. Functions based on rational numbers. And endless potential approaches to analysis, identifying pockets of regularity and computational reducibility.)

\n

Thanks

\n

Thanks to Brad Klee for extensive help. Thanks also to those who’ve worked on nestedly recursive functions as students at our summer programs over the years, including Roberto Martinez (2003), Eric Rowland (2003), Chris Song (2021) and Thomas Adler (2024). I’ve benefitted from interactions about nestedly recursive functions with Ilan Vardi (1991), Tal Kubo (1993), Robby Villegas (2003), Todd Rowland (2003 etc.), Jim Propp (2004), Matthew Szudzik (2005 etc.), Joseph Stocke (2021 etc.), Christopher Gilbert (2024) and Max Niedermann (2024). Thanks to Doug Hofstadter for extensive answers to questions about history for this piece. It’s perhaps worth noting that I’ve personally known many of the people mentioned in the history section here (with the dates I met them indicated): John Conway (1984), Paul Erdös (1986), Sol Golomb (1981), Ron Graham (1983), Benoit Mandelbrot (1986), John McCarthy (1981) and Neil Sloane (1983).

\n

\n

Bibliography of Nestedly Recursive Functions »

\n", + "category": "Computational Science", + "link": "https://writings.stephenwolfram.com/2024/09/nestedly-recursive-functions/", + "creator": "Stephen Wolfram", + "pubDate": "Fri, 27 Sep 2024 17:50:59 +0000", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "en", + "folder": "", + "feed": "wolfram", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "2aa842110c5cd8056215f8dc4c2d2d16", + "highlights": [] + }, + { + "title": "Five Most Productive Years: What Happened and What’s Next", + "description": "\"\"So… What Happened? Today is my birthday—for the 65th time. Five years ago, on my 60th birthday, I did a livestream where I talked about some of my plans. So… what happened? Well, what happened was great. And in fact I’ve just had the most productive five years of my life. Nine books. 3939 pages […]", + "content": "\"\"

[Image: Five Most Productive Years]

\n

So… What Happened?

\n\n

\n

Today is my birthday—for the 65th time. Five years ago, on my 60th birthday, I did a livestream where I talked about some of my plans. So… what happened? Well, what happened was great. And in fact I’ve just had the most productive five years of my life. Nine books. 3939 pages of writings (1,283,267 words). 499 hours of podcasts and 1369 hours of livestreams. 14 software product releases (with our great team). Oh, and a bunch of big—and beautiful—ideas and results.

\n

It’s been wonderful. And unexpected. I’ve spent my life alternating between technology and basic science, progressively building a taller and taller tower of practical capabilities and intellectual concepts (and sharing what I’ve done with the world). Five years ago everything was going well, and making steady progress. But then there were the questions I never got to. Over the years I’d come up with a certain number of big questions. And some of them, within a few years, I’d answered. But others I never managed to get around to.

\n

And five years ago, as I explained in my birthday livestream, I began to think “it’s now or never”. I had no idea how hard the questions were. Yes, I’d spent a lifetime building up tools and knowledge. But would they be enough? Or were the questions just not for our time, but only perhaps for some future century?

\n

At several points before in my life I’d faced such issues—and things had worked out well (A New Kind of Science, Wolfram|Alpha, etc.). And from this, I had gotten a certain confidence about what might be possible. In addition, as a serious student of intellectual history, I had a sense of what kind of boldness was needed. Five years ago there wasn’t really anything that made me need to do something big and new. But I thought: “What the heck. I might as well try. I’ll never know what’s possible unless I try.”

\n

A major theme of my work since the early 1980s had been exploring the consequences of simple computational rules. And I had found the surprising result that even extremely simple rules could lead to immensely complex behavior. So what about the universe? Could it be that at a fundamental level our whole universe is just following some simple computational rule?
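The classic illustration of that simple-rules result is an elementary cellular automaton such as rule 30. A minimal sketch (my own illustrative code, not taken from the original work):

```python
# Elementary cellular automaton: each cell's new value depends only on
# itself and its two neighbors; the 8 possible neighborhoods index into
# the bits of the rule number (30 here). Wraparound boundary for brevity.
def step(cells, rule=30):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31
row[15] = 1  # start from a single black cell
for _ in range(15):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

From that single black cell, this two-line update rule produces an intricate, seemingly random triangular pattern: complexity from an almost trivially simple rule.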

\n

I had begun my career in the 1970s as a teenager studying the frontiers of existing physics. And at first I couldn’t see how computational rules could connect to what is known in physics. But in the early 1990s I had an idea, and by the late 1990s I had developed it and gotten some very suggestive results. But when I published these in A New Kind of Science in 2002, even my friends in the physics community didn’t seem to care—and I decided to concentrate my efforts elsewhere (e.g. building Wolfram|Alpha, Wolfram Language, etc.).

\n

But I didn’t stop thinking “one day I need to get back to my physics project”. And in 2019 I decided: “What the heck. Let’s try it now.” It helped that I’d made a piece of technical progress the year before, and that now two young physicists were enthusiastic to work with me on the project.

\n

And so it was, soon after my birthday in 2019, that we embarked on our Physics Project. It was a mixture of computer experiments and big concepts. But before the end of 2019 it was clear: it was going to work! It was an amazing experience. Thing after thing in physics that had always been mysterious I suddenly understood. And it was beautiful—a theory of such strength built on a structure of such incredible simplicity and elegance.

\n

We announced what we’d figured out in April 2020, right when the pandemic was in full swing. There was still much to do (and there still is today). But the overall picture was clear. I later learned that a century earlier many well-known physicists were beginning to think in a similar direction (matter is discrete, light is discrete; space must be too) but back then they hadn’t had the computational paradigm or the other tools needed to move this forward. And now the responsibility had fallen on us to do this. (Pleasantly enough, given our framework, many modern areas of mathematical physics seemed to fit right in.)

\n

And, yes, figuring out the basic “machine code” for our universe was of course pretty exciting. But seeing an old idea of mine blossom like this had another very big effect on me. It made me think: “OK, what about all those other projects I’ve been meaning to do? Maybe it’s time to do those too.”

\n

And something else had happened as well. In doing the Physics Project we’d developed a new way of thinking about things—not just computational, but “multicomputational”. And actually, the core ideas behind this were in A New Kind of Science too. But somehow I’d never taken them seriously enough before, and never extended my intuition to encompass them. But now with the Physics Project I was doing this. And I could see that the ideas could also go much further.

\n

So, yes, I had a new and powerful conceptual framework for doing science. And I had all the technology of the modern Wolfram Language. But in 2020 I had another thing too—in effect, a new distribution channel for my ideas and efforts. Early in my career I had used academic papers as my “channel” (at one point in 1979 even averaging a paper every few weeks). But in the late 1980s I had a very different kind of channel: embodying my ideas in the design and implementation of Mathematica and what’s now the Wolfram Language. Then in the 1990s I had another channel: putting everything together into what became my book A New Kind of Science.

\n

After that was published in 2002 I would occasionally write small posts—for the community site around the science in my book, for our corporate blog, etc. And in 2010 I started my own blog. At first I mostly just wrote small, fun pieces. But by 2015—partly driven by telling historical stories (200th anniversary of George Boole, 200th anniversary of Ada Lovelace, …)—the things I was writing were getting ever meatier. (There’d actually already been some meaty ones about personal analytics in 2012.)

\n

And by 2020 my pattern was set and I would routinely write 50+ page pieces, full of pictures (all with immediately runnable “click-to-copy” code) and intended for anyone who cared to read them. Finally I had a good channel again. And I started using it. As I’d found over the years—whether with language documentation or with A New Kind of Science—the very act of exposition was a critical part of organizing and developing my ideas.

\n

And now I started producing pieces. Some were directly about specific topics around the Physics Project. But within two months I was already writing about a “spinoff”: “Exploring Rulial Space: The Case of Turing Machines”. I had realized that one of the places the ideas of the Physics Project should apply was to the foundations of mathematics, and to metamathematics. In a footnote to A New Kind of Science I had introduced the idea of “empirical metamathematics”. And in the summer of 2020, fuelled by my newfound “finish those old projects” mindset, I ended up writing an 80-page piece on “The Empirical Metamathematics of Euclid and Beyond”.

\n

December 7, 1920 was the date a certain Moses Schönfinkel introduced what we now call combinators: the very first clear foundations for universal computation. I had always found combinators interesting (if hard to understand). I had used ideas from them back around 1980 in the predecessor of what’s now the Wolfram Language. And I had talked about them a bit in A New Kind of Science. But as the centenary approached, I decided to make a more definitive study, in particular using methods from the Physics Project. And, for good measure, even in the middle of the pandemic I tracked down the mysterious history of Moses Schönfinkel.

\n

In March 2021, there was another centenary, this time of Emil Post’s tag system, and again I decided to finish what I’d started in A New Kind of Science, and write a definitive piece, this time running to about 75 pages.

\n

One might have thought that the excursions into empirical metamathematics, combinators, tag systems, rulial and multiway Turing machines would be distractions. But they were not. Instead, they just deepened my understanding and intuition for the new ideas and methods that had come out of the Physics Project. As well as finishing projects that I’d wondered about for decades (and the world had had open for a century).

\n

Perhaps not surprisingly given its fundamental nature, the Physics Project also engaged with some deep philosophical issues. People would ask me about them with some regularity. And in March 2021 I started writing a bit about them, beginning with a piece on consciousness. The next month I wrote “Why Does the Universe Exist? Some Perspectives from Our Physics Project”. (This piece of writing happened to coincide with the few days in my life when I’ve needed to do active cryptocurrency trading—so I was in the amusing position of thinking about a philosophical question about as deep as they come, interspersed with making cryptocurrency trades.)

\n

Everything kept weaving together. These philosophical questions made me internalize just how important the nature of the observer is in our Physics Project. Meanwhile I started thinking about the relationship of methods from the Physics Project to distributed computing, and to economics. And in May 2021 that intersected with practical blockchain questions, which caused me to write about “The Problem of Distributed Consensus”—which would soon show up again in the science and philosophy of observers.

\n

The fall of 2021 involved really leaning into the new multicomputational paradigm, among other things giving a long list of where it might apply: metamathematics, chemistry, molecular biology, evolutionary biology, neuroscience, immunology, linguistics, economics, machine learning, distributed computing. And, yes, in a sense this was my “to do” list. In many ways, half the battle was just defining this. And I’m happy to say that just three years later, we’ve already made a big dent in it.

\n

While all of this was going on, I was also energetically pursuing my “day job” as CEO of Wolfram Research. Version 12.1 of the Wolfram Language had come out less than a month before the Physics Project was announced. Version 12.2 right after the combinator centenary. And in 2021 there were two new versions. In all, 635 new functions, all of which I had carefully reviewed, and many of which I’d been deeply involved in designing.

\n

It’s a pattern in the history of science (as well as technology): some new methodology or some new paradigm is introduced. And suddenly vast new areas are opened up. And there’s lots of juicy “low-hanging fruit” to be picked. Well, that’s what had happened with the ideas from our Physics Project, and the concept of multicomputation. There were many directions to go, and many people wanting to get involved. And in 2021 it was becoming clear that something organizational had to be done: this wasn’t a job for a company (even for one as terrific and innovative as ours is), it was a job for something like an institute. (And, yes, in 2022, we indeed launched what’s now the Wolfram Institute for Computational Foundations of Science.)

\n

But back in 1986, I had started the very first institute concentrating on complexity and how it could arise from simple rules. Running it hadn’t been a good fit for me back then, and very quickly I started our company. In 2002, when A New Kind of Science was published, I’d thought again about starting an institute. But it didn’t happen. But now there really seemed to be no choice. I started reflecting on what had happened to “complexity”, and whether there was something to leverage from the institutional structure that had grown up around it. Nearly 20 years after the publication of A New Kind of Science, what should “complexity” be now?

\n

I wrote “Charting a Course for ‘Complexity’: Metamodeling, Ruliology and More”—and in doing so, finally invented a word for the “pure basic science of what simple rules do”: ruliology.

\n

My original framing of what became our Physics Project had been to try to “find a computational rule that gives our universe”. But I’d always found this unsatisfying. Because even if we had the rule, we’d still be left asking “why this one, and not another?” But in 2020 there’d been a dawning awareness of a possible answer.

\n

Our Physics Project is based on the idea of applying rules to abstract hypergraphs that represent space and everything in it. But given a particular rule, there are in general many ways it can be applied. And a key idea in our Physics Project is that somehow it’s always applied in all these ways—leading to many separate threads of history, that branch and merge—and, importantly, giving us a way to understand quantum mechanics.
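A toy analog of this idea, far simpler than the hypergraph rewriting of the Physics Project (the string rule “AB” → “BA” here is my own choice, purely for illustration): apply a rewrite rule at every possible position and keep every distinct result, so that threads of history visibly branch and merge.

```python
# Toy multiway system: apply the rewrite rule "AB" -> "BA" at every
# possible position in the string, keeping each distinct outcome as a
# separate "thread of history".
def successors(state, lhs="AB", rhs="BA"):
    out = set()
    for i in range(len(state) - len(lhs) + 1):
        if state[i:i + len(lhs)] == lhs:
            out.add(state[:i] + rhs + state[i + len(lhs):])
    return out

states = {"ABAB"}
for step in range(3):
    states = set().union(*(successors(s) for s in states))
    print(step + 1, sorted(states))
```

Already in this tiny example the two distinct first-step states (“BAAB” and “ABBA”) reconverge to the same state “BABA” on the next step: branching and merging in miniature.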

\n

We talked about these different threads of history corresponding to different places in branchial space—and about how the laws of quantum mechanics are the direct analogs in branchial space (or branchtime) of the laws of classical mechanics (and gravity) in physical space (or spacetime). But what if instead of just applying a given rule in all possible ways, we applied all possible rules in all possible ways?

\n

What would one get? In November 2021 I came up with a name for it: the ruliad. A year and a half earlier I’d already been starting to talk about rulial space—and the idea of us as observers perceiving the universe in terms of our particular sampling of rulial space. But naming the ruliad really helped to crystallize the concept. And I began to realize that I had come upon a breathtakingly broad intellectual arc.

\n

The ruliad is the biggest computational thing there can be: it’s the entangled limit of all possible computations. It’s abstract and it’s unique—and it’s as inevitable in its structure as 2 + 2 = 4. It encompasses everything computational—including us. So what then is physics? Well, it’s a description of how observers like us embedded in the ruliad perceive the ruliad.

\n

Back in 1984 I’d introduced what I saw as being the very central concept of computational irreducibility: the idea that there are many computational processes whose outcomes can be found only by following them step by step—with no possibility of doing what mathematical science was accustomed to: “jumping ahead” and making predictions without going through each step. At the beginning of the 1990s, when I began to work on A New Kind of Science, I’d invented the Principle of Computational Equivalence—the idea that systems whose behavior isn’t obviously simple will always tend to be equivalent in the sophistication of the computations they do.

\n

Given the Principle of Computational Equivalence, computational irreducibility was inevitable. It followed from the fact that the observer could only be as computationally sophisticated as the system they were observing, and so would never be able to “jump ahead” and shortcut the computation. There’d come to be a belief that eventually science would always let one predict (and control) things. But here—from inside science—was a fundamental limitation on the power of science. All these things I’d known in some form since the 1980s, and with clarity since the 1990s.

\n

But the ruliad took things to another level. For now I could see that the very laws of physics we know were determined by the way we are as observers. I’d always imagined that the laws of physics just are the way they are. But now I realized that we could potentially derive them from the inevitable structure of the ruliad, and very basic features of what we’re like as observers.

\n

I hadn’t seen this philosophical twist coming. But somehow it immediately made sense. We weren’t getting our laws of physics from nothing; we were getting them from being the way we are. Two things seemed to be critical: that as observers we are computationally bounded, and that (somewhat relatedly) we believe we are persistent in time (i.e. we have a continuing thread of experience through time).

\n

But even as I was homing in on the idea of the ruliad as it applied to physics, I was also thinking about another application: the foundations of mathematics. I’d been interested in the foundations of mathematics for a very long time; in fact, in the design of Mathematica (and what’s now the Wolfram Language) and its predecessor SMP, I’d made central use of ideas that I’d developed from thinking about the foundations of mathematics. And in A New Kind of Science, I’d included a long section on the foundations of mathematics, discussing things like the network of all possible theorems, and the space of all possible axiom systems.

\n

But now I was developing a clearer picture. The ruliad represented not only all possible physics, but also all possible mathematics. And the actual mathematics that we perceive—like the actual physics—would be determined by our nature as observers, in this case mathematical observers. There were lots of technical details, and it wasn’t until March 2022 that I published “The Physicalization of Metamathematics and Its Implications for the Foundations of Mathematics”.

\n

In some ways this finished what I’d started in the mid-1990s. But it went much further than I expected, in particular in providing a sweeping unification of the foundations of physics and mathematics. It talked about what the ultimate limit of mathematics would be like. And it talked about how “human-level mathematics”—where we can discuss things like the Pythagorean theorem rather than just the microdetails of underlying axioms—emerges for observers like us just like our human-level impression of physical space emerges from the underlying network of atoms of space.

\n

One of the things I’d discovered in computational systems is how common computational irreducibility is, along with undecidability. And I had always wondered why undecidability wasn’t more common in typical mathematics. But now I had an answer: it just isn’t what mathematical observers like us “see” in the ruliad. At some level, this was a very philosophical result. But for me it also had practical implications, notably greatly validating the idea of using higher-level computational language to represent useful human-level mathematics, rather than trying to drill down to “axiomatic machine code”.

\n

October 22, 2021 had marked a third of a century of Mathematica. And May 14, 2022 was the 20th anniversary of A New Kind of Science. And in contextualizing my activities, and planning for the future, I’ve increasingly found it useful to reflect on what I’ve done before, and how it’s worked out. And in both these cases I could see that seeds I’d planted many years earlier had blossomed, sometimes in ways I’d suspected they might, and sometimes in ways that far exceeded what I’d imagined.

\n

What had I done right? The key, it seemed, was drilling down to find the essence of things, and then developing that. Even if I hadn’t been able to imagine quite what could be built on them, I’d been able to construct solid foundations that successfully encapsulated things in the cleanest and simplest ways.

\n

In talking about observers and the ruliad—and in fact our Physics Project in general—I kept on making analogies to the way that the gas laws and fluid dynamics emerge from the complicated underlying dynamics of molecules. And at the core of this is the Second Law of thermodynamics.

\n

Well, as it happens, the very first foundational question in physics that I ever seriously studied was the origin of the Second Law. But that was when I was 12 years old, in 1972. For more than a century the Second Law had been quite mysterious. But when I discovered computational irreducibility in 1984 I soon realized that it might be the key to the Second Law. And in the summer of 2022—armed with a new perspective on the importance of observers—I decided I’d better once and for all write down how the Second Law works.

\n

Once again, there were lots of technical details. And as a way to check my ideas I decided to go back and try to untangle the rather confused 150-year history of the Second Law. It was an interesting exercise, satisfying for seeing how my new ways of thinking clarified things, but cautionary in seeing how wrong turns had been taken—and solidified—in the past. But in the end, there it was: the Second Law was a consequence of the interplay between underlying computational irreducibility, and our limitations as observers.

\n

It had taken half a century, but finally I had finished the project I’d started when I was 12 years old. I was on a roll finishing things. But I was also realizing that a bigger structure than I’d ever imagined was emerging. The Second Law project completed what I think is the most beautiful thing I’ve ever discovered. That all three of the core theories of twentieth century physics—general relativity, quantum mechanics and the Second Law (statistical mechanics)—have the same origin: the interplay between the underlying computational structure of the ruliad, and our characteristics and limitations as observers.

\n

And I knew it didn’t stop there. I’d already applied the same kind of thinking to the foundations of mathematics. And I was ready to start applying it to all sorts of deep questions in science, in philosophy, and beyond. But at the end of 2022, just as I was finishing my pieces about the Second Law, there was a surprise: ChatGPT.

\n

I’d been following AI and neural nets for decades. I first simulated a neural net in 1981. My first company, started in 1981, had, to my chagrin, been labeled an “AI company”. And from the early 2010s we’d integrated neural nets into the Wolfram Language. But—like the creators of ChatGPT—I didn’t expect the capabilities that emerged in ChatGPT. And as soon as I saw ChatGPT I started trying to understand it. What was it really doing? What would its capabilities be?

\n

In the world at large, there was a sense of shock: if AI can do this now, soon it’ll be able to do everything. But I immediately thought about computational irreducibility. Yes, it places limitations on us humans. But those limitations would inevitably apply to AIs as well. There would be things that couldn’t be “quickly figured out by pure thought”—by humans and AIs alike. And, by the way, I’d just spent four decades building a way to represent things computationally, and actually do systematic computations on them—because that was the point of the Wolfram Language.

\n

So immediately I could see we were in a very interesting position. The Wolfram Language had the unique mission of creating a full-scale computational language. And now this was a crucial tool for AIs. The AIs could provide a very interesting and useful broad linguistic interface. But when it came to solid computation, they were—like humans—going to need a tool. Conveniently, Wolfram|Alpha already communicated in natural language. And it took only a few weeks to hook up Wolfram|Alpha—and Wolfram Language—to ChatGPT. We’d given “computational superpowers” to the AI.

\n

ChatGPT was everywhere. And people kept asking me about it. And over and over again I ended up explaining things about it. So at the beginning of February 2023 I decided it’d be better for me just to write down once and for all what I knew. It took a little over a week (yes, I’m a fast writer)—and then I had an “explainer” (that ran altogether to 76 pages) of ChatGPT.

\n

Partly it talked in general about how machine learning and neural nets work, and how ChatGPT in particular works. But what a lot of people wanted to know was not “how” but “why” ChatGPT works. Why was something like that possible? Well, in effect ChatGPT was showing us a new science discovery—about language. Everyone knows that there’s a certain syntactic grammar of language—like that, in English, sentences typically have the form noun-verb-noun. But what ChatGPT was showing us is that there’s also a semantic grammar—some pattern of rules for what words can be put together and make sense.

\n

I’ve thought about the foundations of language for a long time (which isn’t too surprising, given the four decades I’ve spent as a computational language designer). So in effect I was well primed to think about its interaction with ChatGPT. And it also helped that—as I’ll talk about below—one of my long-unfinished projects is precisely on a formal framework for capturing meaning that I call “symbolic discourse language”.

\n

In technology and other things I always like best situations where basically nothing is known, and one has to invent everything from scratch. And that’s what was happening for functionality based on LLMs in the middle of 2023. How would LLM-based Wolfram Language functions work? How would a prompt repository work? How would LLMs interact with notebooks?

\n

Meanwhile, there was still lots of ferment in the world about the “AI shock”. Before the arrival of the Physics Project in 2019, I’d been quite involved in AI philosophy, AI ethics, etc. And in March 2023 I wrote a piece on “Will AIs Take All Our Jobs and End Human History—or Not?” In the end—after all sorts of philosophical arguments, and an analysis of actual historical data—the answer was: “It’s Complicated”. But along the way computational irreducibility and the ruliad were central elements: limiting the controllability of AIs, allowing for an infinite frontier of invention, and highlighting the inevitable meaninglessness of everything in the absence of human choice.

\n

By this point (and actually, with remarkable speed) my explainer on ChatGPT had turned into a book—that proved extremely popular (and now, for example, exists in over 10 languages). It was nice that people found the book useful—and perhaps it helped remove some of the alarming mystique of AI. But I couldn’t help noticing that of all the many things I’d written, this had been one of the fastest to write, yet it was garnering one of the largest readerships.

\n

One might have imagined that AI was pretty far from our Physics Project, the ruliad, etc. But actually it soon became clear that there were close connections, and that there were things to learn in both directions. In particular, I’d come to think of minds that work in different ways as occupying different positions in the ruliad. But how could one get intuition about what such minds would experience—or observe? Well, I realized, one could just look at generative AI. In July I wrote “Generative AI Space and the Mental Imagery of Alien Minds”. I called this the “cats in hats piece”, because, yes, it has lots of pictures of (often bizarrely distorted) cats (in hats)—used as examples of what happens if one moves a mind around in rulial space. But despite the whimsy of the cats, this piece provided a surprisingly useful window into what for me has been a very longstanding question of how other minds might perceive things.

\n

And this fed quite directly into my piece on “Observer Theory” in December 2023. Ever since things like Turing machines, we’ve had a formal model for the process of computation. My goal was to do the same kind of thing for the process of observation. In a sense, computation constructs sequences of new things, say with time. Observation, on the other hand, equivalences things together, so they fit in finite minds. And just what equivalencing is done—by our senses, our measuring devices, our thinking—determines what our ultimate perceptions will be. Or, put another way, if we can characterize well enough what we’re like as observers, it’ll show us how we sample the ruliad, and what we’ll perceive the laws of physics to be.
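A minimal sketch of what “equivalencing” means (my own toy construction, not a formalism from the piece): an observer who can only measure one coarse quantity collapses many distinct microstates into a few observed macrostates.

```python
from itertools import product
from collections import defaultdict

# A bounded "observer" that can only measure the total number of 1s:
# it equivalences the 16 distinct 4-bit microstates into just 5
# observed macrostates.
microstates = list(product([0, 1], repeat=4))

def observe(state):
    return sum(state)  # the only thing this observer can distinguish

classes = defaultdict(list)
for s in microstates:
    classes[observe(s)].append(s)

for value in sorted(classes):
    print(value, len(classes[value]))  # class sizes: 1, 4, 6, 4, 1
```

The choice of `observe` is the whole story: a different equivalencing (a different observer) would carve the same set of microstates into entirely different perceived macrostates.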

\n

When I started the Physics Project I wasn’t counting on it having any applications for hundreds of years. But quite soon it became clear that actually there were going to be all sorts of near-term applications, particularly of the formalism of multicomputation. And every time one used that formalism one could get more intuition about features of the Physics Project, particularly related to quantum mechanics. I ended up writing a variety of “ruliological” pieces, all, as it happens, expanding on footnotes in A New Kind of Science. There was “Multicomputation with Numbers” (October 2021), “Games and Puzzles as Multicomputational Systems” (June 2022) and “Aggregation and Tiling as Multicomputational Processes” (November 2023). And in September 2023 there was also “Expression Evaluation and Fundamental Physics”.

\n

Back around 1980—when I was working on SMP—I’d become interested in the theory of expression evaluation. And finally, now, with the Physics Project—and my work on combinators and metamathematics—four decades later I had a principled way to study it (potentially with immediate applications in distributed computing and computational language design). And I could check off progress on another long-pending project.

\n

I give many talks, and do many podcasts and livestreams—essentially all unprepared. But in October 2023 I agreed to give a TED talk. And I just didn’t see any way to fit a reasonable snapshot of my activities into 18 minutes without preparation. How was I to coherently explain the Physics Project, the ruliad and computational language in such a short time? I called the talk “How to Think Computationally about AI, the Universe and Everything”. And I began with what for me was a new condensation: “Human language. Mathematics. Logic. These are all ways to formalize the world. And in our century there’s a new and yet more powerful one: computation.”

\n

Over the years I’d done all sorts of seemingly very different projects in science and in technology. But somehow it seemed like they were now all converging. Back in 1979, for example, I’d invented the idea of transformations for symbolic expressions as a foundation for computational language. But now—more than four decades later—our Physics Project was saying that those kinds of transformations (specifically on hypergraphs) were just what the “machine code of the universe” was made of.

\n

Since the 1980s I’d thought that computation was a useful paradigm with which to think about the world. But now our Physics Project and the ruliad were saying that it wasn’t just useful; it was the underlying paradigm of the world. For some time I’d been viewing our whole Wolfram Language effort as a way to formalize computation for the purposes of both humans and machines. Four hundred years ago mathematical notation had streamlined mathematical thinking, allowing what became the mathematical sciences to develop. I saw what we were doing with our computational language as a way to streamline computational thinking, and allow “computational X” for all fields “X” to develop.

\n

I began to see computational thinking as a way to “humanize” the ruliad; to pick out those parts that are meaningful to humans. And I began to see computational language as the bridge between the power of raw computation, and the kinds of things we humans think about.

But how did AI fit in? At the beginning of 2024, lots of people were still asking in effect “Can AI Solve Science?” So I decided to analyze that. I certainly didn’t expect AI to be able to “break computational irreducibility”. And it didn’t. Yes, it could automate much of what humans could do in a quick look. But formalized, irreducible computation: that was going to need computational language, not AI.

It’s easy to be original in the computational universe: if you pick a rule at random, it’s overwhelmingly likely nobody’s ever looked at it before. But will anyone care? They’ll care if in effect that part of the ruliad has been “colonized”; if there’s already a human connection to it. But what if you define some attribute that you want, then just “search out there” for a rule that exhibits it? That’s basically what biological evolution—or machine learning training—seems to do.

And as a kind of off-hand note I decided to just see if I could make a minimal model for that. I’d tried before—in the mid-1980s. And in the 1990s when I was writing A New Kind of Science I’d become convinced that computational irreducibility was in a sense a stronger force than adaptive evolution, and that when complex behavior was seen in biology, it was computational irreducibility that should take most of the credit.

But I decided to just do the experiment and see. And although computational irreducibility in a sense tells one to always “expect the unexpected”, in all these years I’ve never fully come to terms with that—and I’m still regularly surprised by what simple systems somehow “cleverly” manage to do. And so it was with my minimal model of biological evolution.

I’d always wondered why biological evolution managed to work at all, why it didn’t “get stuck”, and how it managed to come up with the ornate “solutions” it did. Well, now I knew: and it turned out it was, once again, a story of computational irreducibility. And I’d managed to finish another project that I started in the 1980s.

But then there was machine learning. And despite all the energy around it—as well as practical experience with it—it didn’t seem like there was a good foundational understanding of what it was doing or why it worked. For a couple of years I’d been asking all the machine learning experts I ran into what they knew. But mostly they confirmed that, yes, it wasn’t well understood. And in fact several of them suggested that I’d be the best person to figure it out.

So just a few weeks ago, starting with ideas from the biological evolution project, and mixing in some things I tried back in 1985, I decided to embark on exploring minimal models of machine learning. I just posted the results last week. And, yes, one seems to be able to see the essence of machine learning in systems vastly simpler than neural nets. In these systems one can visualize what’s going on—and it’s basically a story of finding ways to put together lumps of irreducible computation to do the tasks we want. Like stones one might pick up off the ground to put together into a stone wall, one gets something that works, but there’s no reason for there to be any understandable structure to it.

Like so many of the projects I’ve done in the past five years, I could in principle have done this project much earlier—even in the 1980s. But back then I didn’t have the intuition, the tools or the intellectual confidence to actually dive in and get the project done. And what’s been particularly exciting over the past five years is that I can feel—and even very tangibly see—how what I can do has grown. With every project I’ve done I’ve further honed my intuition, developed more tools (both conceptual and practical), and built my intellectual confidence. Could I have gotten here earlier in my life? I don’t think so. I think to get to where I am now required the kind of journey I’ve taken through science, technology and the other things I’ve done. A living example of the phenomenon of computational irreducibility.

The Process of Getting Things Done

I started my career young—and usually found myself the “youngest person in the room”. But shockingly fast all those years whizzed by, and now I’m usually the “oldest person in the room”. But somehow I always still seem to feel like a young whippersnapper—not settled into some expected pattern, and “pushing for the future”.

I’ve always done projects that are hard. Projects that many people thought were impossible. Projects that stretched my capabilities to the limit. And to do this has required a certain mixture of confidence and humility. Confidence that it’s worth me trying the project. Humility in not assuming that it’ll be easy for me.

I’ve learned a lot of fields by now, and with them a lot of different ways of thinking. But somehow it’s never enough to make the projects I do easy. Somehow the projects are always far enough out on the frontier that I have to learn new things and new ways of thinking to succeed at them. And so there I am, often the only person in the room whose project isn’t somehow easy for them. And who still has to be pushing, whippersnapper style.

At this point, a fair fraction of the projects I do are ones that I’ve thought about for a long time; a smaller fraction are opportunistic—coming into scope just now as a result of something I’ve done, or something that’s happened in the world at large. Before the past five years I had a lot of projects that had languished, often for decades. Yes, I thought they would be interesting, and I gradually collected information about them. But somehow I wasn’t quite in a place to tackle them.

But now I feel quite differently. In the past five years, I’ve gone back and finished a fair fraction of all those languishing projects. And it’s been great. Without exception, the projects turned out to be richer and more interesting than I expected. Often I realized I really couldn’t have done them without the tools and ideas (and infrastructure) I now have. And—often to my great surprise—the projects turned out to have very direct connections to big themes around the ruliad, the Physics Project and, for that matter, computational language.

Why was this happening? Partly it’s a tribute to the breadth of the computational (and now multicomputational) paradigm. But partly it has to do with the specific character of projects I was choosing—always seeking what seemed like the simplest, most foundational versions of things.

I’ve done quite a few big projects in my life, many seemingly very different. But as I look back, I realize that all my projects have a certain overall pattern to them. They’re all about taking something that seems complicated, then drilling down to find the foundations of what’s going on, and then building up from these—often with considerable engineering-style effort. And the methods and tools I’ve developed have in a sense implicitly been optimized for this pattern of work.

I suppose one gets used to the rhythm of it all. The time when one’s drilling down, slowly trying to understand things. The time when one’s doing all the work to build the big structure up. And yes, it’s all hard. But by now I know the signs of progress, and they’re always energizing to see.

At any given time, I’ll have many projects gestating—often for years or decades. But once a project becomes active, it’s usually the only one I’m working on. And I’ll work on it with great intensity, pushing hard to keep going until it’s done. Often I’ll be working with other people, usually much younger than me. And I think it’s always a surprise that I’ll routinely be the one who works with the greatest intensity—every day, at all hours.

I think I’m pretty efficient too. Of course, it helps that I have a tool—Wolfram Language—that I’ve been building for decades to support me. And it helps that I’ve developed all kinds of practices around how I organize code and notebooks I create, and how I set up my process of writing about things. Of course, it also helps that I have very capable people around me to make suggestions, explore additional directions, fill in details, check things, and get my write-ups produced and published.

As I have written about elsewhere, my life is in many ways set up to be quite simple and routine. I get up at the same time every day, eat the same thing for breakfast, and so on. But in a sense this frees me to concentrate on the intellectual things I’m doing—which are different every day, often in unexpected ways.

But how is it that I even get the time to do all these intellectual things? After all, I am—as I have been for the past 38 years—the CEO of a very active tech company. Two things I think help (in addition, of course, to the fact that I have such a great long-term team at the company). First, organization. And second, resolve. Every day I’ll have tightly scheduled meetings over the course of the working day. (And there are lots of details to this. I get up in the late morning, then do my first two meetings while walking, and so on.) But somehow—mostly on evenings and weekends—I find time to work intensely on my intellectual projects.

It’s not as if I ignore everything else in the world. But I do have a certain drive—and resolve—that fills any time available with my projects, and somehow seems to succeed in getting them done. (And, yes, there are many optimizations in the details of my life, saving me all sorts of time. And it probably helps that I’ve been a work-from-home CEO now for 33 years.)

One might have thought that CEOing would greatly detract from being able to do intellectual work. But I find the exact opposite. Because in my experience the discipline of strategy and decision making (as well as communicating thoughts and ideas to other people) that comes with CEOing is critical to being able to do incisive intellectual work. And, by the way, the kind of thinking that goes with intellectual work is also incredibly valuable in being an effective CEO.

There’s another critical part to my “formula”. And that has to do with exposition. For me, the exposition of a project is an integral part of the project. Part of it is that the very definition of the question is often one of the most important parts of a project. But more than that, it’s through exposition that I find I really understand things. It takes a certain discipline. It can be easy enough to make some highfalutin technical statement. But can one grind it down into truly simple pieces that one can immediately understand? Yes, that means other people will be able to understand it too. But for me, what’s critical is that that’s the way I can tell if I’m getting things right. And for me the exposition is what in the end defines the backbone of a project.

Normally I write quickly, and basically without revision. But whenever there’s a piece I’m finding unduly hard to write I know that’s where I’m muddled, and need to go back and understand what’s going on. Some of my projects (like creating this piece, for example) end up being essentially “pure writing”. But most are deeply computational—and full of computer experiments. And just as I put a lot of effort into making written exposition clear, I do the same for computational language, and for pictures. Indeed, many of my projects are in large measure driven by pictures. Usually these are what one can think of as “algorithmic diagrams”—created automatically with a structure optimized for exposition.

And the pictures aren’t just useful for presenting what I’ve done; they’re also critical to my own efforts to figure things out. And I’ve learned that it’s important to get the presentational details of pictures right as early as possible in a project—to give myself the best chance to notice things.

Often the projects I do require exploring large numbers of possible systems. And somehow with great regularity this leads to me ending up looking at large arrays of little pictures. Yes, there’s a lot of “looking” that can be automated. But in the end computational irreducibility means there’ll always be the unexpected, that I basically have to see for myself.

A great thing about the Wolfram Language is that it’s been very stable ever since it was first released. And that means that I can take notebooks even from the 1980s and immediately run them today. And, yes, given all the “old” projects I’ve worked on in the past five years, that’s been very important.

But in addition to being very stable, the Wolfram Language is also very self-contained—and very much intended to be readable by humans. And the result is something that I’ve found increasingly important: every computational picture in everything I write has Wolfram Language code “behind it” that you can get by clicking. All the time I find myself going back to previous things I’ve written, and picking up click-to-copy code to run for some new case, or use as the basis for something new I’m doing.

And of course that click-to-copy code is open for anyone to use. Not only for its “computational content”, but also for the often-elaborate visuals it implements.

Most of my writings over the past five years have been about new basic science. But interspersed with this—along with pieces about technology and about philosophy—are pieces about history. And in fact many of my scientific pieces have had extensive historical sections as well.

Why do I put such effort into history? Partly I just find it fun to figure out. But mostly it’s to contextualize my understanding of things. Particularly in the past five years I’ve ended up working on a whole sequence of projects that are in a sense about changing longstanding directions in science. And to feel confident about making such changes, one has to know why people went in those directions in the first place. And that requires studying history.

Make no mistake: history—or at least good history—is hard. Often there’ll be a standard simple story about how some discovery was suddenly made, or how some direction was immediately defined. But the real story is usually much more complicated—and much more revealing of the true intellectual foundations of what was figured out. Almost never did someone discover something “one day”; almost always it took many years to build up the conceptual framework so that “one day” the key thing could even be noticed.

When I do history I always make a big effort to look at the original documents. And often I realize that’s critical—because it’s only with whatever new understanding I’ve developed that one would stand a chance of correctly interpreting what’s in the documents. And even if one’s mainly interested in the history of ideas, I’ve always found it’s crucial to also understand the people who were involved with them. What was their motivation? What was their practical situation? What kinds of things did they know about? What was their intellectual style in thinking about things?

It has helped me greatly that I’ve had my own experiences in making discoveries—that gives me an intuition for how the process of discovery works. And it also helps that I’ve had my fair share of “worldly” experiences. Still, often it’s at first a mystery how some idea developed or some discovery got made. But my consistent experience is that with enough effort one can almost always solve it.

Particularly for the projects I’ve done in recent years, it often leaves me with a strange feeling of connection. For in many cases I find out that the things I’ve now done can be viewed as direct follow-ons to ideas that were thought about a century or more ago, and for one reason or another ignored or abandoned since.

And I’m then usually left with a strong sense of responsibility. An idea that was someone’s great achievement had been buried and lost to the world. But now I have found it again, and it rests on me to bring it into the future.

In addition to writing about “other people’s history”, I’ve also been writing quite a bit about my own history. And in the last few years I’ve made a point of explaining my personal history around the science—and technology—I describe. In doing this, it helps a lot that I have excellent personal archives—that routinely let me track to within minutes discoveries I made even four decades ago.

My goal in describing my own history is to help other people contextualize things I write about. But I have to say that time and time again I’ve found the effort to piece together my own history extremely valuable just for me. As I go through life, I try to build up a repertoire of patterns for how things I do fit together. But often those patterns aren’t visible at the time. And it takes going back—often years later—to see them.

I do the projects I do first and foremost for myself. But I’ve always liked the idea that other people can get their own pleasure and benefit from my projects. And—basically starting with the Physics Project—I’ve tried to open to the world not just the results of my projects, but the process by which they’re done.

I post my working notebooks. Whenever practical I livestream my working meetings. And, perhaps taking things to an extreme, I record even my own solitary work, posting it in “video work logs”. (Except I just realized I forgot to record the writing I’m doing right now!)

A couple of years before the Physics Project I actually also opened up my technology development activities—livestreaming our software design reviews, in the past five years 692 hours of them. (And, yes, I put a lot of work and effort into designing the Wolfram Language!)

At the beginning of the pandemic I thought: “There are all these kids out of school. Let me try to do a little bit of public service and livestream something about science and technology for them.” And that’s how I started my “Science & Technology Q&A for Kids & Others” livestreams, that I’ve now been doing for four and a half years. Along the way, I’ve added “History of Science & Technology Q&A”, “Future of Science & Technology Q&A”, and “Business, Innovation & Managing Life Q&A”. Altogether I’ve done 272 hours of these, that have generated 376 podcast episodes.

Twice a week I sit down in front of a camera, watch the feed of questions, and try to answer them. It’s always off the cuff, completely unprepared. And I find it a great experience. I can tell that over the time I’ve been doing this, I’ve become a better and more fluent explainer, which no doubt helps my written exposition too. Often in answering questions I’ll come up with a new way to explain something, that I’ve never thought of before. And often there’ll be questions that make me think about things I’ve never thought about at all before. Indeed, several of my recent projects actually got started as a result of questions people asked.

When I was younger I always just wanted to get on with research, create things, and so on; I wasn’t interested in education. But as I’ve gotten older I’ve come to really like education. Partly it’s because I feel I learn a lot myself from it, but mostly it’s because I find it fulfilling to use what I know and try to help people develop.

I’ve always been interested in people—a useful attribute in running a talent-rich company for four decades. (I’m particularly interested in how people develop through their lives—leading me recently, for example, to organize a 50-year reunion for my elementary school class.) I’ve had a long-time “hobby” of mentoring CEOs and kids (both being categories of people who tend to believe that anything is possible).

But my main educational efforts are concentrated in a few weeks of the year when we do our Wolfram Summer School (started in 2003) and our Wolfram High School Summer Research Program (started in 2012). All the students in these programs (775 of them over the past five years) do an original project, and one of my jobs is to come up with what all these projects should be. Over the course of the year I’ll accumulate ideas—though rather often when I actually meet a student I’ll invent something new.

I obviously do plenty of projects myself. But it’s always an interesting—and invigorating—experience to see so many projects get done with such intensity at our summer programs. Plus, I get lots of extra practice in framing projects that helps when I come to frame my own projects.

At this point, I’ve spent years trying to organize my life to optimize it for what I want to get out of it. I need long stretches of time when I can concentrate coherently. But I like having a diversity of activities, and I’m pretty sure I wouldn’t have the energy and effectiveness I do without that. Over the years, I’ve added in little pieces. Like my weekly virtual sessions where I “do my homework” with a group of kids, working on something that I need to get done, but that doesn’t quite fit elsewhere. Or my weekly sessions with local kids, talking about things that make me and them think. Or, for that matter, my “call while driving” list of calls it’s good to make, but wouldn’t usually quite get the priority to happen.

Doing all the things I do is hard work. But it’s what I want to do. Yes, things can drag from time to time. But at this point I’m so used to the rhythm of projects that I don’t think I notice much. And, yes, I work basically every hour of every day I can. Do I have hobbies? Well, back when I was an academic, business was my main “hobby”. When I started CEOing, science became a “hobby”. Writing. Education. Livestreaming. These were all “hobbies” too. But somehow one of the patterns of my life is that nothing really stays quite as a “true hobby”.

What’s Next?

The past five years have not only been my most productive ever, but they’ve also built more “productivity momentum” than I’ve had before. So, what’s next? I have a lot of projects currently “in motion”, or ready to “get into motion”. Then I have many more that are in gestation, for which the time may finally have come. But I know there’ll also be surprises: projects that suddenly occur to me, or that I suddenly realize are possible. And one of the great challenges is to be in a position to actually jump into such things.

It has to be said that there’s always a potentially complicated tradeoff. To what extent should one “tend” the things one’s already done, and to what extent should one do new things? Of course, there are some things that are never “done”—like the Wolfram Language, which I started building 38 years ago, and still (energetically) work on every day. Or the Physics Project, where there’s just so much to figure out. But one of the things that’s worked well in most of the basic science projects I’ve done in the past five years or so is that once I’ve written my piece about the project, I can usually consider the project “done for now”. It always takes a lot of effort to get a project to the point where I can write about it. But I work hard to make sure I only have to do it once; that I’ve “picked the low-hanging fruit”, so I don’t feel I have to come back “to add a little more”.

I put a lot of effort into the pieces I write about my projects. And I also give talks, do interviews, etc. (about 500 altogether in the past five years). But I certainly don’t “market” my efforts as much as I could. It’s a decision I’ve made: that at this point in my life—particularly with the burst of productivity I’m experiencing—I want to spend as much of my time as possible doing new things. And so I need to count on others to follow up and spread knowledge about what I’ve done, whether in the academic world, on Wikipedia, the web, etc. (And, yes, pieces I write and the pictures they contain are set up to be immediately reproducible wherever appropriate.)

OK, so what specific new things are currently in my pipeline? Well, there’s lots of science (and related intellectual things). And there’s also lots of technology. But let’s talk about science first.

A big story is the Physics Project—where there’s a lot to be done, in many different directions. There’s foundational theory to be developed. And there are experimental implications to be found.

It’d be great if we could find experimental evidence of the discreteness of space, or maximum entanglement speed, or a host of other unexpected phenomena in our models. A century or so ago it was something of a stroke of luck that atoms were big enough that they could be detected. And we don’t know if the discreteness of space is something we’ll be able to detect now—or only centuries from now.

There are phenomena—particularly associated with black holes—that might effectively serve as powerful “spacetime microscopes”. And there are phenomena like dimension fluctuations that could potentially show up in a variety of astrophysical settings. But one direction I’m particularly interested in exploring is what one might call “spacetime heat”—the effect of detailed microscopic dynamics in the hypergraph that makes up spacetime. Could “dark matter”, for example, not be “matter” at all, but instead be associated with spacetime heat?

Part of investigating this involves building practical simulation software to investigate our models on as large a scale as possible. And part of it involves “good, old-fashioned physics”, figuring out how to go from underlying foundational effects to observable phenomena.

And there’s a foundational piece to this too. How does one set up mathematics—and mathematical physics—when one’s starting from a hypergraph? A traditional manifold is ultimately built up from Euclidean space. But what kind of object is the limit of a hypergraph? To understand this, we need to construct what I’m calling infrageometry—and infracalculus alongside it. Infrageometry—as its name suggests—starts from something lower level than traditional geometry. And the challenge is in effect to build a “21st century Euclid”, then Newton, etc.—eventually finding generalizations of things like differential geometry and algebraic topology that answer questions like what 3-dimensional curvature tensors are like, or how we might distinguish local gauge degrees of freedom from spatial ones in a limiting hypergraph.
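
One concrete way to probe “what kind of object is the limit of a hypergraph” is to measure how the volume of a geodesic ball grows with radius: in d-dimensional space, V(r) grows like r^d, so the ratio V(2r)/V(r) approaches 2^d. The sketch below is my own illustration (not code from the project), and it uses an ordinary 2D grid graph as a stand-in for a hypergraph:

```python
import math
from collections import deque

def grid_neighbors(node, n):
    # neighbors of (i, j) in an n x n grid graph
    i, j = node
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= i + di < n and 0 <= j + dj < n:
            yield (i + di, j + dj)

def ball_volume(start, r, n):
    # count nodes within graph distance r of start (breadth-first search)
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == r:
            continue
        for nb in grid_neighbors(node, n):
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, d + 1))
    return len(seen)

def effective_dimension(r, n=101, center=(50, 50)):
    # growth exponent: V(2r)/V(r) ~ 2^d  =>  d = log2 of the ratio
    return math.log(ball_volume(center, 2 * r, n) / ball_volume(center, r, n), 2)

for r in (4, 8):
    print(r, round(effective_dimension(r), 2))
```

For the grid the estimate tends toward the true dimension 2 as r grows; applied to an evolving hypergraph’s geodesic balls, the same growth-rate measurement would give an effective (possibly fluctuating, possibly non-integer) dimension.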

Another direction has to do with particles—like electrons. The fact is that existing quantum field theory in a sense only really deals with particles indirectly, by thinking of them as perturbations in a field—which in turn is full of (usually unobservable) zero-point fluctuations. In our models, the structure of everything—from spacetime up—is determined by the “fluctuating” structure of the underlying hypergraph (or, more accurately, by the whole multiway graph of “possible fluctuations”). And what this suggests is that there’s in a sense a much lower-level version of the Feynman diagrams we use in quantum field theory, in which we can discuss the “effect of particles” without ever having to say exactly what a particle “is”.

I must say that I expected we’d have to know what particles were even to talk about energy. But it turned out there was a “bulk” way to do that. And maybe similarly there’s an indirect way to talk about interactions between particles. My guess is that in our model particles are structures a bit like black holes—but we may be able to go a very long way without having to know the details.

One of the important features of our models is that quantum mechanics is “inevitable” in them. And one of the projects I’m hoping to do is to finally “really understand quantum mechanics”. In general terms, it’s connected to the way branching observers (like us) perceive branching universes. But how do we get intuition for this, and what effects can we expect? Several projects over the past years (like multiway Turing machines, multiway games, multiway aggregation, etc.) I’ve done in large part to bolster my intuition about branchial space and quantum mechanics.
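
To give a feel for the kind of branching these multiway projects explore, here is a minimal multiway string substitution system in Python (my own illustrative sketch; the rules are arbitrary examples, not ones from the projects mentioned). Every applicable rewrite at every position produces a successor state, so a single state fans out into many “threads of history”:

```python
def multiway_step(states, rules):
    # apply every rule at every match position in every state
    new = set()
    for s in states:
        for lhs, rhs in rules:
            start = s.find(lhs)
            while start != -1:
                new.add(s[:start] + rhs + s[start + len(lhs):])
                start = s.find(lhs, start + 1)
    return new

rules = [("A", "AB"), ("B", "A")]
states = {"A"}
for _ in range(3):
    states = multiway_step(states, rules)
print(sorted(states))  # → ['AAB', 'ABA', 'ABBB']
```

Tracking which states merge back together (here "ABA" is reached from both "ABB" and "AA") is what gives the multiway graph its structure; an observer “knitting threads together” corresponds to identifying such branches.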

I first worked on quantum computers back in 1980. And at the time, I thought that the measurement process (whose mechanism isn’t described in the standard formalism of quantum mechanics) would be a big problem for them. Years have gone by, and enthusiasm for quantum computers has skyrocketed. In our models there’s a rather clear picture that inside a quantum computer there are “many threads of history” that can in effect do computations in parallel. But for an observer like us to “know what the answer is” we have to knit those threads together. And in our models (particularly with my observer theory efforts) we start to be able to see how that might happen, and what the limitations might be.

Meanwhile, in the world at large there are all sorts of experimental quantum computers being built. But what are their limitations? I have a suspicion that there’s some as-yet-unknown fundamental physics associated with these limitations. It’s like building telescopes: you polish the mirror, and keep on making engineering tweaks. But unless you know about diffraction, you won’t understand why your resolution is limited. And I have a slight hope that even existing results on quantum computers may be enough to see limitations perhaps associated with maximum entanglement speed in our models. And the way our models work, knowing this speed, you can for example immediately deduce the discreteness scale of space.

Back in 1982, I and another physicist wrote two papers on “Properties of the Vacuum”. Part 1 was mechanical properties. Part 2 was electrodynamic. We announced a part 3, on gravitational properties. But we never wrote it. Well, finally, it looks as if our Physics Project shows us how to think about such properties. So perhaps it’s time to finally write “Part 3”, and respond to all those people who sent preprint request cards for it four decades ago.

One of the great conclusions of our Physics Project—and the concept of the ruliad—is that we have the laws of physics we do because we are observers of the kind we are. And just knowing very coarsely about us as observers seems to already imply the major laws of twentieth century physics. And to be able to say more, I think we need more characterization of us as observers. And my guess is, for example, that some feature of us that we probably consider completely obvious is what leads us to perceive space as (roughly) three dimensional. And indeed I increasingly suspect that the whole structure of our Physics Project can be derived—a bit like early derivations of special relativity—from certain axiomatic assumptions about our nature as observers, and fundamental features of computation.

There’s plenty to do on our Physics Project, and I’m looking forward to making progress with all of it. But the ideas of the Physics Project—and multicomputation in general—apply to lots of other fields too. And I have many projects planned on these.

Let’s talk first about chemistry. I never found chemistry interesting as a kid. But as we’ve added chemistry functionality in the Wolfram Language, I’ve understood more about it, and why it’s interesting. And I’ve also followed molecular computing since the 1980s. And now, largely inspired by thinking about multicomputation, I’ve become very interested in what one might call the foundations of chemistry. Actually, what I’m most interested in is what I’m calling “subchemistry”. I suppose one can think of it as having a similar kind of relation to chemistry as infrageometry has to geometry.

In ordinary chemistry, one thinks about reactions between different species of molecules. And to calculate rates of reactions, one multiplies concentrations of different species, implicitly assuming that there’s perfect randomness in which specific molecules interact. But what if one goes to a lower level, and starts talking about the interactions not of species of molecules, but individual molecules? From our Physics Project we get the idea of making causal graphs that represent the causal relations between different specific interaction events.
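
The contrast between the two levels of description can be made concrete. In the sketch below (my own illustration, not code from the text), the mass-action rate multiplies concentrations, implicitly assuming any A can meet any B, while the individual-molecule view picks specific molecules and records each interaction event, which is the raw material from which a causal graph could be built:

```python
import random

# Well-mixed (mass-action) view: rate proportional to the product of
# concentrations, i.e. effectively to the number of possible A-B pairs.
def mass_action_rate(k, count_a, count_b, volume):
    return k * (count_a / volume) * (count_b / volume)

# Individual-molecule view: specific molecules react, and each
# interaction event is recorded so causal relations could be traced.
def simulate_events(count_a, count_b, steps, seed=0):
    rng = random.Random(seed)
    a = list(range(count_a))          # molecule ids of species A
    b = list(range(count_b))          # molecule ids of species B
    events = []
    for t in range(steps):
        if not a or not b:
            break
        i = rng.choice(a)             # "perfect randomness": any A...
        j = rng.choice(b)             # ...is equally likely to meet any B
        a.remove(i)
        b.remove(j)
        events.append((t, i, j))      # event record: time, reactants
    return events

print(mass_action_rate(2.0, 5, 5, 1.0))   # → 50.0
print(len(simulate_events(5, 5, 3)))      # → 3
```

Replacing the uniform `rng.choice` with molecule-specific (e.g. spatially orchestrated) selection rules is exactly where the well-mixed assumption breaks down and a genuinely “subchemical” description would be needed.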

\n

In a gas the assumption of molecular-level randomness will probably be pretty good. But even in a liquid it’ll be more questionable. And in more exotic materials it’ll be a completely different story. And I suspect that there are “subchemical” processes that can potentially be important, perhaps in a sense finding a new “slice of computational reducibility” within the general computational irreducibility associated with the Second Law.

\n

But the most important potential application of subchemistry is in biology. If we look at biological tissue, a basic question might be: “What phase of matter is it?” One of the major takeaways from molecular biology in the last few decades has been that in biological systems, molecules (or at least large ones) are basically never just “bouncing around randomly”. Instead, their motion is typically carefully orchestrated.

\n

So when we look at biological tissue—or a biological system—we’re basically seeing the result of “bulk orchestration”. But what are the laws of bulk orchestration? We don’t know. But I want to find out. I think the “mechanoidal phase” that I identified in studying the Second Law is potentially a good test case.

\n

If we look at a microprocessor, it’s not very useful to describe it as “containing a gas of electrons”. And similarly, it’s not useful to describe a biological cell as “being liquid inside”. But just what kind of theory is needed to have a more useful description we don’t know. And my guess is that there’ll be some new level of abstraction that’s needed to think about this (perhaps a bit like the new abstraction that was needed to formulate information theory).

\n

Biology is not big on theory. Yes, there’s natural selection. And there’s the digital nature of biomolecules. But mostly biology has ended up just accumulating vast amounts of data (using ever better instrumentation) without any overarching theory. But I suspect that in fact there’s another foundational theory to be found in biology. And if we find it, a lot of the data that’s been collected will suddenly fall into place.

\n

There’s the “frankly molecular” level of biology. And there’s the more “functional” level. And I was surprised recently to be able to find a very minimal model that seems to capture “functional” aspects of biological evolution. It’s a surprisingly rich model, and there’s much more to explore with it, notably about how different “ideas” get propagated and developed in the process of adaptive evolution—and what kinds of tree-of-life-style branchings occur.

\n

And then there’s the question of self replication—a core feature of biology. Just how simple a system can exhibit it in a “biologically relevant way”? I had thought that self replication was “just relevant for biology”. But in thinking about the problem of observers in the ruliad, I’ve come to realize that it’s also relevant at a foundational level there. It’s no good to just have one observer; you have to have a whole “rulial flock” of similar ones. And to get similar ones you need something like self replication.

\n

Talking of “societies of observers” brings me to another area I want to study: economics. How does a coherent economic system emerge from all the microscopic transactions and other events in a society? I suspect it’s a story that’s in the end similar to the theories we’ve studied in physics—from the emergence of bulk properties in fluids, to the emergence of continuum spacetime, and so on. But now in economics we’re dealing not with fluid density or metric, but instead with things like price. I don’t yet know how it will work out. Maybe computational reducibility will be associated with value. Maybe computational irreducibility will be what determines robustness of value. But I suspect that there’s a way of thinking about “economic observers” in the ruliad—and figuring out what “natural laws” they’ll “inevitably observe”. And maybe some of those natural laws will be relevant in thinking about the kind of questions we humans care about in economics.

\n

It’s rather amazing in how many different areas one seems to be able to apply the kind of approach that’s emerged from the Physics Project, the ruliad, etc. One that I’ve very recently tackled is machine learning. And in my effort to understand its foundations, I’ve ended up coming up with some very minimal models. My purpose was to understand the essence of machine learning. But—somewhat to my surprise—it looks as if these minimal models can actually be practical ways to do machine learning. Their hardware-level tradeoffs are somewhat different. But—given my interest in practical technology—I want to see if one can build out a practical machine-learning framework that’s based on these (fundamentally discrete) models.

\n

And while I’m not currently planning to investigate this myself, I suspect that the approach I’ve used to study machine learning can also be applied to neuroscience, and perhaps to linguistics. And, yes, there’ll probably be a lot of computational irreducibility in evidence. And once again one has to hope that the pockets of computational reducibility that exist will give rise to “natural laws” that are useful for what we care about in these fields.

\n

In addition to these “big” projects, I’m also hoping to do a variety of “smaller” projects. Many I started decades ago, and in fact mentioned in A New Kind of Science. But now I feel I have the tools, intuition and intellectual momentum to finally finish them. Nestedly recursive functions. Deterministic random tilings. Undecidability in the three-body problem. “Meta-engineering” in the Game of Life. These might on their own seem esoteric. But my repeated experience—particularly in the past five years—is that by solving problems like these one builds examples and intuition that have surprisingly broad application.

\n

And then there are history projects. Just what did happen to theories of discrete space in the early twentieth century (and how close did people like Einstein get to the ideas of our Physics Project)? What was the “ancient history” of neural nets, and why did people come to assume they should be based on continuous real numbers? I fully expect that as I investigate these things, I’ll encounter all sorts of “if only” situations—where for example some unpublished note languishing in an archive (or attic) would have changed the course of science if it had seen the light of day long ago. And when I find something like this, it’s yet more motivation to actually finish those projects of mine that have been languishing so long in the filesystem of my computer.

\n

There’s a lot I want to do “down in the computational trenches”, in physics, chemistry, biology, economics, etc. But there are also things at a more abstract level in the ruliad. There’s more to study about metamathematics, and about how mathematics that we humans care about can emerge from the ruliad. And there are also foundational questions in computer science. P vs. NP, for example, can be formulated as an essentially geometric problem in the ruliad—and conceivably there are mathematical methods (say from higher category theory) that might give insight into it.

\n

Then there are questions about hyperruliads and hyporuliads. In a hyperruliad that’s based on hypercomputation, there will be hyperobservers. But is there a kind of “rulial relativity” that makes their perception of things just the same as “ordinary observers” in the ordinary ruliad? A way to get some insight into this may be to study hyporuliads—versions of the ruliad in which there are only limited levels of computation possible. A bit like the way a spacelike singularity associated with a black hole supports only limited time histories, or a decidable axiomatic theory supports only proofs of limited length, there will be limitations in the hyporuliad. And by studying them, there’s a possibility that we’ll be able to see more about issues like what kinds of mathematical axioms can be compatible with observers like us.

\n

It’s worth commenting that our Physics Project—and the ruliad—have all sorts of connections and resonances with long-studied ideas in philosophy. “Didn’t Kant talk about that? Isn’t that similar to Leibniz?”, etc. I’ve wanted to try to understand these historical connections. But while I’ve done a lot of work on the historical development of ideas, the ideas in question have tended to be more focused, and more tied to concrete formalism than they usually are in philosophy. “Did Kant actually mean that, or something completely different?” You might have to understand all his works to know. And that’s more than I think I can do.

\n

I invented the concept of the ruliad as a matter of science. But it’s now clear that the ruliad has all sorts of connections and resonances not only with philosophy but also with theology. Indeed, in a great many belief systems there’s always been the idea that somehow in the end “everything is one”. In cases where this gets slightly more formalized, there’s often some kind of combinatorial enumeration involved (think: I Ching, or various versions of “counting the names of God”).

\n

There are all sorts of examples where long-surviving “ancient beliefs” end up having something to them, even if the specific methods of post-1600s science don’t have much to say about them. One example is the notion of a soul, which we might now see as an ancient premonition of the modern notion of abstract computation. And whenever there’s a belief that’s ancient, there’s likely to have been lots of thinking done around it over the millennia. So if we can, for example, see a connection to the ruliad, we can expect to leverage that thinking. And perhaps also be able to provide new input that can refine the belief system in interesting and valuable ways.

\n

I’m always interested in different viewpoints about things—whether from science, philosophy, theology, wherever. And an extreme version of this is to think about how other “alien” minds might view things. Nowadays I think of different minds as effectively being at different places in the ruliad. Humans with similar backgrounds have minds that are close in rulial space. Cats and dogs have minds that are further away. And the weather (with its “mind of its own”) is still further.

\n

Now that we have AIs we potentially have a way to study the correspondence—and communication—between “different minds”. I looked at one aspect of this in my “cats” piece. But my recent work on the foundations of machine learning suggests a broader approach, that can also potentially tell us things about the fundamental character of language, and about how it serves as a medium that can “transport thoughts” from one mind to another.

\n

Many non-human animals seem to have at least some form of language—though mostly in effect just a few standalone words. But pretty unquestionably the greatest single invention of our species is language—and particularly compositional language where words and phrases can fit together in an infinite number of ways. But is there something beyond compositional language? And, for example, where might we get if our brains were bigger?

\n

With the 100 billion neurons in our brains, we seem to be able to handle about 50,000 words. If we had a trillion neurons we’d probably be able to handle more words (though perhaps more slowly), in effect letting us describe more things more easily. But what about something fundamentally beyond compositional language? Something perhaps “higher order”?

\n

With a word we are in effect conflating all instances of a certain concept into a single object that we can then work with. But typically with ordinary words we’re dealing with what we might call “static concepts”. So what about “ways of thinking”, or paradigms? They’re more like active, functional concepts. And it’s a bit like dogs versus us: dogs deal with a few standalone words; we “package” those together into whole sentences and beyond. And at the next level, we could imagine in effect packaging things like generators of meaningful sentences.

\n

Interestingly enough, we have something of a preview of ideas like this—in computational language. And this is one of those places where my efforts in science—and philosophy—start to directly intersect with my efforts in technology.

\n

The foundation of the Wolfram Language is the idea of representing everything in computational terms, and in particular in symbolic computational terms. And one feature of such a representation is that it can encompass both “data” and “code”—i.e. both things one might think about, and ways one might think about them.

\n

I first started building Wolfram Language as a practical tool—though one very much informed by my foundational ideas. And now, four decades later, the Wolfram Language has emerged as the largest single project of my life, and something that, yes, I expect to always put immense effort into. It wasn’t long ago that we finally finished my 1991 to-do list for Wolfram Language—and we have many projects running now that will take years to complete. But the mission has always remained the same: to take the concept of computation and apply it as broadly as possible, through the medium of computational language.

\n

Now, however, I have some additional context for that—viewing computational language as a bridge from what we humans think about to what’s possible in the computational universe. And this helps in framing some of the ways to expand the foundations of our computational language, for example to multicomputation, or to hypergraph-based representations. It also helps in understanding the character of current AI, and how it needs to interact with computational language.

\n

In the Wolfram Language we’ve been steadily trying to create a representation for everything. And when it comes to definitive, objective things we’ve gotten a long way. But there’s more than that in everyday discourse. For example, I might say “I’m going to drink a glass of orange juice.” Well, we do just fine at representing “a glass of orange juice” in the Wolfram Language, and we can compute lots of things—like nutrition content—about it. But what about “I’m going to drink…”? For that we need something different.

\n

And, actually, I’ve been thinking for a shockingly long time about what one might need. I first considered the question in the early 1980s, in connection with “extending SMP to AI”. I learned about the attempts to make “philosophical languages” in the 1600s, and about some of the thinking around modern conlangs (constructed languages). Something that always held me back, though, was use cases. Yes, I could see how one could use things like this for tasks like customer service. But I wasn’t too excited about that.

\n

But finally there was blockchain, and with it, smart contracts. And around 2015 I started thinking about how one might represent contracts in general not in legalese but in some precise computational way. And the result was that I began to crispen my ideas about what I called “symbolic discourse language”. I thought about how this might relate to questions like a “constitution for AIs” and so on. But I never quite got around to actually starting to design the specifics of the symbolic discourse language.

\n

But then along came LLMs, together with my theory that their success had to do with a “semantic grammar” of language. And finally now we’ve launched a serious project to build a symbolic discourse language. And, yes, it’s a difficult language design problem, deeply entangled with a whole range of foundational issues in philosophy. But as, by now at least, the world’s most experienced language designer (for better or worse), I feel a responsibility to try to do it.

\n

In addition to language design, there’s also the question of making all the various “symbolic calculi” that describe in appropriately coarse terms the operation of the world. Calculi of motion. Calculi of life (eating, dying, etc.). Calculi of human desires. Etc. As well as calculi that are directly supported by the computation and knowledge in the Wolfram Language.

\n

And just as LLMs can provide a kind of conversational linguistic interface to the Wolfram Language, one can expect them also to do this to our symbolic discourse language. So the pattern will be similar to what it is for Wolfram Language: the symbolic discourse language will provide a formal and (at least within its purview) correct underpinning for the LLM. It may lose the poetry of language that the LLM handles. But from the outset it’ll get its reasoning straight.

\n

The symbolic discourse language is a broad project. But in some sense breadth is what I have specialized in. Because that’s what’s needed to build out the Wolfram Language, and that’s what’s needed in my efforts to pull together the foundations of so many fields.

\n

And in maintaining a broad range of interests there are some where I imagine that someday there’ll be a project I can do, but there may for example be many years of “ambient technology” that are needed before that project will be feasible. Usually, though, I have some “conceptual idea” of what the project might be. For example, I’ve followed robotics, imagining that one day there’ll be a way to do “general-purpose robotics”, perhaps constructing everything out of modular elements. I’ve followed biomedicine, partly out of personal self-interest, and partly because I think it’ll relate to some of the foundational questions I’m asking in biology.

\n

But in addition to all the projects where the goal is basic research, or technology development, I’m also hoping to pursue my interests in education. Much of what I hope to do relates to content, but some of it relates to access and motivation. I don’t have perfect evidence, but I strongly believe there’s a lot of young talent out there in the world that never manages to connect for example with things like the educational programs we put on. We—and I—have tried quite hard over the years to “bridge the gap”. But with the world as it is, it’s proved remarkably difficult. But it’s still a problem I’d like to solve, and I’ll keep picking away at it, hoping to change for the better some kids’ “trajectories”.

\n

But about content I believe my path is clearer. With the modern Wolfram Language I think we’ve gone a long way towards being able to take computational thinking about almost anything, and being able to represent it in a formalized way, and compute from it. But how do people manage to do the computational thinking in the first place? Well, like mathematical thinking and other formalized kinds of thinking, they have to learn how to do it.

\n

For years people have been telling me I should “write the book” to teach this. And finally in January of this year I started. I’m not sure how long it will take, but I’ll soon be starting to post sections I’ve written so far.

\n

My goal is to create a general book—and course—that’s an introduction to computational thinking at a level suitable for typical first-year college students. Lots of college students these days say they want to study “computer science”. But really it’s computational X for some field X that they’re ultimately interested in. And neither the theoretical nor the engineering aspects of typical “computer science” are what’s most relevant to them. What they need to know is computational thinking as it might be applied to computational X—not “CS” but what one might call “CX”.

\n

So what will CX101 be like? In some ways more like a philosophy course than a CS one. Because in the end it’s about generally learning to think, albeit in the new paradigm of computation. And the point is that once someone has a clear computational conceptualization of something, then it’s our job in the Wolfram Language to make sure that it’s easy for them to concretely implement it.

\n

But how does one teach computational conceptualization? What I’ve concluded is that one needs to anchor it in actual things in the world. Geography. Video. Genomics. Yes, there are principles to explain. But they need practical context to make them useful, or even understandable. And what I’m finding is that framing everything computationally makes things very much easier to explain than before. (A test example coming soon is whether I can easily explain math ideas like algebra and calculus this way.)

\n

OK, so that’s a lot of projects. But I’m excited about all of them, and can’t wait to make them happen. At an age when many of my contemporaries are retiring, I feel like I’m just getting started. And somehow the way my projects keep on connecting back to things I did decades ago makes me feel—in a computational irreducibility kind of way—that there’s something necessary about all the steps I’ve taken. I feel like the things I’ve done have let me climb some hills. But now there are many more hills that have come into view. And I look forward to being able to climb those too. For myself and for the world.

\n", + "category": "Life & Times", + "link": "https://writings.stephenwolfram.com/2024/08/five-most-productive-years-what-happened-and-whats-next/", + "creator": "Stephen Wolfram", + "pubDate": "Thu, 29 Aug 2024 16:31:46 +0000", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "en", + "folder": "", + "feed": "wolfram", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "a9ab72735d41e27ea935637018d87dff", + "highlights": [] + }, + { + "title": "What’s Really Going On in Machine Learning? Some Minimal Models", + "description": "\"\"The Mystery of Machine Learning It’s surprising how little is known about the foundations of machine learning. Yes, from an engineering point of view, an immense amount has been figured out about how to build neural nets that do all kinds of impressive and sometimes almost magical things. But at a fundamental level we still […]", + "content": "\"\"


\n

The Mystery of Machine Learning

\n

It’s surprising how little is known about the foundations of machine learning. Yes, from an engineering point of view, an immense amount has been figured out about how to build neural nets that do all kinds of impressive and sometimes almost magical things. But at a fundamental level we still don’t really know why neural nets “work”—and we don’t have any kind of “scientific big picture” of what’s going on inside them.

\n

The basic structure of neural networks can be pretty simple. But by the time they’re trained up with all their weights, etc. it’s been hard to tell what’s going on—or even to get any good visualization of it. And indeed it’s far from clear even what aspects of the whole setup are actually essential, and what are just “details” that have perhaps been “grandfathered” all the way from when computational neural nets were first invented in the 1940s.

\n

Well, what I’m going to try to do here is to get “underneath” this—and to “strip things down” as much as possible. I’m going to explore some very minimal models—that, among other things, are more directly amenable to visualization. At the outset, I wasn’t at all sure that these minimal models would be able to reproduce any of the kinds of things we see in machine learning. But, rather surprisingly, it seems they can.

\n

And the simplicity of their construction makes it much easier to “see inside them”—and to get more of a sense of what essential phenomena actually underlie machine learning. One might have imagined that even though the training of a machine learning system might be circuitous, somehow in the end the system would do what it does through some kind of identifiable and “explainable” mechanism. But we’ll see that in fact that’s typically not at all what happens.

\n

Instead it looks much more as if the training manages to home in on some quite wild computation that “just happens to achieve the right results”. Machine learning, it seems, isn’t building structured mechanisms; rather, it’s basically just sampling from the typical complexity one sees in the computational universe, picking out pieces whose behavior turns out to overlap what’s needed. And in a sense, therefore, the possibility of machine learning is ultimately yet another consequence of the phenomenon of computational irreducibility.

\n

Why is that? Well, it’s only because of computational irreducibility that there’s all that richness in the computational universe. And, more than that, it’s because of computational irreducibility that things end up being effectively random enough that the adaptive process of training a machine learning system can reach success without getting stuck.

\n

But the presence of computational irreducibility also has another important implication: that even though we can expect to find limited pockets of computational reducibility, we can’t expect a “general narrative explanation” of what a machine learning system does. In other words, there won’t be a traditional (say, mathematical) “general science” of machine learning (or, for that matter, probably also neuroscience). Instead, the story will be much closer to the fundamentally computational “new kind of science” that I’ve explored for so long, and that has brought us our Physics Project and the ruliad.

\n

In many ways, the problem of machine learning is a version of the general problem of adaptive evolution, as encountered for example in biology. In biology we typically imagine that we want to adaptively optimize some overall “fitness” of a system; in machine learning we typically try to adaptively “train” a system to make it align with certain goals or behaviors, most often defined by examples. (And, yes, in practice this is often done by trying to minimize a quantity normally called the “loss”.)

\n

And while in biology there’s a general sense that “things arise through evolution”, quite how this works has always been rather mysterious. But (rather to my surprise) I recently found a very simple model that seems to do well at capturing at least some of the most essential features of biological evolution. And while the model isn’t the same as what we’ll explore here for machine learning, it has some definite similarities. And in the end we’ll find that the core phenomena of machine learning and of biological evolution appear to be remarkably aligned—and both fundamentally connected to the phenomenon of computational irreducibility.

\n

Most of what I’ll do here focuses on foundational, theoretical questions. But in understanding more about what’s really going on in machine learning—and what’s essential and what’s not—we’ll also be able to begin to see how in practice machine learning might be done differently, potentially with more efficiency and more generality.

\n

Traditional Neural Nets

\n

\n
\n

Note: Click any diagram to get Wolfram Language code to reproduce it.

\n
\n

To begin the process of understanding the essence of machine learning, let’s start from a very traditional—and familiar—example: a fully connected (“multilayer perceptron”) neural net that’s been trained to compute a certain function f[x]:

\n
\n
\n

\n

If one gives a value x as input at the top, then after “rippling through the layers of the network” one gets a value at the bottom that (almost exactly) corresponds to our function f[x]:

\n
\n
\n

\n

Scanning through different inputs x, we see different patterns of intermediate values inside the network:

\n
\n
\n

\n

And here’s (on a linear and log scale) how each of these intermediate values changes with x. And, yes, the way the final value (highlighted here) emerges looks very complicated:

\n
\n
\n

\n

So how is the neural net ultimately put together? How are these values that we’re plotting determined? We’re using the standard setup for a fully connected multilayer network. Each node (“neuron”) on each layer is connected to all nodes on the layer above—and values “flow” down from one layer to the next, being multiplied by the (positive or negative) “weight” (indicated by color in our pictures) associated with the connection through which they flow. The value of a given neuron is found by totaling up all its (weighted) inputs from the layer before, adding a “bias” value for that neuron, and then applying to the result a certain (nonlinear) “activation function” (here ReLU or Ramp[z], i.e. If[z < 0, 0, z]).
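The layer rule just described (total the weighted inputs, add the bias, apply the activation) is compact enough to sketch directly. Here is a minimal pure-Python version; the network shape and weights are made up purely for illustration, and this is not the article’s Wolfram Language code:

```python
def relu(z):
    # ReLU activation ("Ramp[z]" in the article): If[z < 0, 0, z]
    return z if z > 0 else 0.0

def layer(values, weights, biases):
    """One fully connected layer: each neuron totals its weighted inputs
    from the layer above, adds its bias, then applies the activation."""
    return [relu(sum(w * v for w, v in zip(ws, values)) + b)
            for ws, b in zip(weights, biases)]

def forward(x, net):
    """Ripple a scalar input down through all layers of the network."""
    values = [x]
    for weights, biases in net:
        values = layer(values, weights, biases)
    return values[0]

# A tiny network with made-up weights (purely illustrative, not the
# article's trained net): 1 input -> 3 hidden neurons -> 1 output.
net = [
    ([[1.0], [-2.0], [0.5]], [0.0, 1.0, -0.2]),   # hidden layer
    ([[1.5, 0.7, -1.0]],     [0.1]),              # output layer
]
y = forward(0.5, net)  # -> 0.8 for these particular weights
```

With trained rather than invented weights, `forward` would approximate the target function f[x] in the way the article’s diagrams show.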

\n

What overall function a given neural net will compute is determined by the collection of weights and biases that appear in the neural net (along with its overall connection architecture, and the activation function it’s using). The idea of machine learning is to find weights and biases that produce a particular function by adaptively “learning” from examples of that function. Typically we might start from a random collection of weights, then successively tweak weights and biases to “train” the neural net to reproduce the function:

\n
\n
\n

\n

We can get a sense of how this progresses (and, yes, it’s complicated) by plotting successive changes in individual weights over the course of the training process (the spikes near the end come from “neutral changes” that don’t affect the overall behavior):

\n
\n
\n

\n

The overall objective in the training is progressively to decrease the “loss”—the average (squared) difference between true values of f[x] and those generated by the neural net. The evolution of the loss defines a “learning curve” for the neural net, with the downward glitches corresponding to points where the neural net in effect “made a breakthrough” in being able to represent the function better:
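As a miniature of this training procedure, here is a Python sketch (not the article’s setup: the model is a two-parameter line, and the target function and hyperparameters are invented) that starts from random parameters and progressively tweaks them to decrease the average squared loss:

```python
import random

def f(x):
    # Stand-in target function to learn (the article's f[x] is different).
    return 2.0 * x + 1.0

def model(x, params):
    w, b = params
    return w * x + b

def loss(params, xs):
    # "Loss": average squared difference between the true values of f[x]
    # and the values the model generates.
    return sum((f(x) - model(x, params)) ** 2 for x in xs) / len(xs)

def train(params, xs, steps=2000, lr=0.05, eps=1e-6):
    """Progressively tweak parameters to decrease the loss. Real neural
    net training uses backpropagation; this sketch uses finite-difference
    gradients, but the objective is the same."""
    params = list(params)
    for _ in range(steps):
        for i in range(len(params)):
            base = loss(params, xs)
            params[i] += eps
            grad = (loss(params, xs) - base) / eps   # numerical gradient
            params[i] -= eps + lr * grad             # undo probe, then step
    return params

random.seed(0)                       # training is typically randomized
xs = [i / 10 for i in range(-10, 11)]
trained = train([random.uniform(-1, 1), random.uniform(-1, 1)], xs)
```

Recording `loss(params, xs)` at each step of the loop would trace out a learning curve of the kind plotted above.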

\n
\n
\n

\n

It’s important to note that typically there’s randomness injected into neural net training. So if one runs the training multiple times, one will get different networks—and different learning curves—every time:

\n
\n
\n

\n
\n
\n

\n

But what’s really going on in neural net training? Effectively we’re finding a way to “compile” a function (at least to some approximation) into a neural net with a certain number of (real-valued) parameters. And in the example here we happen to be using about 100 parameters.

\n

But what happens if we use a different number of parameters, or set up the architecture of our neural net differently? Here are a few examples, indicating that for the function we’re trying to generate, the network we’ve been using so far is pretty much the smallest that will work:

\n
\n
\n

\n

And, by the way, here’s what happens if we change our activation function from ReLU to the smoother ELU:

\n
\n
\n

\n

Later we’ll talk about what happens when we do machine learning with discrete systems. And in anticipation of that, it’s interesting to see what happens if we take a neural net of the kind we’ve discussed here, and “quantize” its weights (and biases) in discrete levels:

\n
\n
\n

\n

The result is that (as recent experience with large-scale neural nets has also shown) the basic “operation” of the neural net does not require precise real numbers, but survives even when the numbers are at least somewhat discrete—as this 3D rendering as a function of the discreteness level δ also indicates:
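Quantization in this sense is just rounding each weight to the nearest multiple of the discreteness level δ. A one-function Python sketch (the example weights are invented):

```python
def quantize(weights, delta):
    """Snap each weight to the nearest multiple of the discreteness
    level delta (a sketch of the weight "quantization" described above)."""
    return [round(w / delta) * delta for w in weights]

# Made-up weights, purely for illustration.
q = quantize([0.31, -1.27, 0.05, 2.4], 0.5)
# q is [0.5, -1.5, 0.0, 2.5]
```

Running the quantized weights back through the network, as the article does, is what shows that the net’s basic operation survives this coarsening.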

\n
\n
\n

\n

Simplifying the Topology: Mesh Neural Nets

\n

So far we’ve been discussing very traditional neural nets. But to do machine learning, do we really need systems that have all those details? For example, do we really need every neuron on each layer to get an input from every neuron on the previous layer? What happens if instead every neuron just gets input from at most two others—say with the neurons effectively laid out in a simple mesh? Quite surprisingly, it turns out that such a network is still perfectly able to generate a function like the one we’ve been using as an example:
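The “at most two inputs per neuron” idea can be sketched as a single layer rule. The wiring below (neuron i reading its two neighbors from the layer above) is a guess for illustration, not necessarily the article’s exact mesh layout; it uses ELU, which the article uses for training mesh nets:

```python
from math import exp

def elu(z):
    # ELU activation; the article notes ReLU doesn't work well for mesh nets.
    return z if z > 0 else exp(z) - 1.0

def mesh_layer(values, weights, biases):
    """One layer of a mesh net: neuron i reads only the two adjacent
    neurons i and i + 1 from the layer above, rather than every neuron
    as in a fully connected layer. The exact wiring here is hypothetical,
    purely for illustration."""
    return [elu(w1 * values[i] + w2 * values[i + 1] + b)
            for i, ((w1, w2), b) in enumerate(zip(weights, biases))]

# Made-up values and weights: a 4-neuron layer feeding a 3-neuron layer.
out = mesh_layer([0.2, -0.4, 0.9, 0.1],
                 [(1.0, -0.5), (0.3, 0.8), (-1.2, 0.4)],
                 [0.0, 0.1, -0.2])
```

Because each neuron touches only its neighbors, the per-layer state can be laid out in a line, which is what makes the cellular-automaton-style visualizations below possible.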

\n
\n
\n

\n

And one advantage of such a “mesh neural net” is that—like a cellular automaton—its “internal behavior” can readily be visualized in a rather direct way. So, for example, here are visualizations of “how the mesh net generates its output”, stepping through different input values x:

[image]

And, yes, even though we can visualize it, it’s still hard to understand “what’s going on inside”. Looking at the intermediate values of each individual node in the network as a function of x doesn’t help much, though we can “see something happening” at places where our function f[x] has jumps:

[image]

So how do we train a mesh neural net? Basically we can use the same procedure as for a fully connected network of the kind we saw above (ReLU activation functions don’t seem to work well for mesh nets, so we’re using ELU here):

[image]

Here’s the evolution of differences in each individual weight during the training process:

[image]

And here are results for different random seeds:

[image]

At the size we’re using, our mesh neural nets have about the same number of connections (and thus weights) as our main example of a fully connected network above. And we see that if we try to reduce the size of our mesh neural net, it doesn’t do well at reproducing our function:

[image]

Making Everything Discrete: A Biological Evolution Analog


Mesh neural nets simplify the topology of neural net connections. But, somewhat surprisingly at first, it seems as if we can go much further in simplifying the systems we’re using—and still successfully do versions of machine learning. And in particular we’ll find that we can make our systems completely discrete.


The typical methodology of neural net training involves progressively tweaking real-valued parameters, usually using methods based on calculus, and on finding derivatives. And one might imagine that any successful adaptive process would ultimately have to rely on being able to make arbitrarily small changes, of the kind that are possible with real-valued parameters.


But in studying simple idealizations of biological evolution I recently found striking examples where this isn’t the case—and where completely discrete systems seemed able to capture the essence of what’s going on.


As an example consider a (3-color) cellular automaton. The rule is shown on the left, and the behavior one generates by repeatedly applying that rule (starting from a single-cell initial condition) is shown on the right:

[image]

The rule has the property that the pattern it generates (from a single-cell initial condition) survives for exactly 40 steps, and then dies out (i.e. every cell becomes white). And the important point is that this rule can be found by a discrete adaptive process. The idea is to start, say, from a null rule, and then at each step to randomly change a single outcome out of the 27 in the rule (i.e. make a “single-point mutation” in the rule). Most such changes will cause the “lifetime” of the pattern to get further from our target of 40—and these we discard. But gradually we can build up “beneficial mutations”

[image]

that through “progressive adaptation” eventually get to our original lifetime-40 rule:

[image]

We can make a plot of all the attempts we made that eventually let us reach lifetime 40—and we can think of this progressive “fitness” curve as being directly analogous to the loss curves in machine learning that we saw before:

[image]

If we make different sequences of random mutations, we’ll get different paths of adaptive evolution, and different “solutions” for rules that have lifetime 40:

[image]

Two things are immediately notable about these. First, that they essentially all seem to be “using different ideas” to reach their goal (presumably analogous to the phenomenon of different branches in the tree of life). And second, that none of them seem to be using a clear “mechanical procedure” (of the kind we might construct through traditional engineering) to reach their goal. Instead, they seem to be finding “natural” complicated behavior that just “happens” to achieve the goal.


It’s nontrivial, of course, that this behavior can achieve a goal like the one we’ve set here, as well as that simple selection based on random point mutations can successfully reach the necessary behavior. But as I discussed in connection with biological evolution, this is ultimately a story of computational irreducibility—particularly in generating diversity both in behavior, and in the paths necessary to reach it.


But, OK, so how does this model of adaptive evolution relate to systems like neural nets? In the standard language of neural nets, our model is like a discrete analog of a recurrent convolutional network. It’s “convolutional” because at any given step the same rule is applied—locally—throughout an array of elements. It’s “recurrent” because in effect data is repeatedly “passed through” the same rule. The kinds of procedures (like “backpropagation”) typically used to train traditional neural nets wouldn’t be able to train such a system. But it turns out that—essentially as a consequence of computational irreducibility—the very simple method of successive random mutation can be successful.
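The adaptive process described above can be sketched in Python (the article’s own code is Wolfram Language; the step cap and mutation budget here are arbitrary choices, so a given run may or may not reach the target exactly). The rule is a table of 27 outcomes for a 3-color, nearest-neighbor cellular automaton; we make single-point mutations and keep only those that don’t move the pattern’s lifetime further from 40.

```python
import random

random.seed(0)

TARGET, MAXT = 40, 60

def lifetime(rule):
    """Steps before the pattern from a single cell dies out (capped at MAXT).
    `rule` maps each (left, center, right) triple, encoded in base 3,
    to a new color in {0, 1, 2}."""
    cells = {0: 1}                       # sparse: position -> nonzero color
    for t in range(1, MAXT + 1):
        nxt = {}
        for i in range(min(cells) - 1, max(cells) + 2):
            trip = 9 * cells.get(i - 1, 0) + 3 * cells.get(i, 0) + cells.get(i + 1, 0)
            c = rule[trip]
            if c:
                nxt[i] = c
        if not nxt:
            return t
        cells = nxt
    return MAXT + 1                      # survived past the horizon

# adaptive evolution: start from the null rule, make single-point mutations,
# keep a mutation only if it doesn't move the lifetime further from TARGET
rule = [0] * 27
loss = abs(lifetime(rule) - TARGET)
for _ in range(1500):
    trial = list(rule)
    trial[random.randrange(1, 27)] = random.randrange(3)  # keep all-white -> white
    new = abs(lifetime(trial) - TARGET)
    if new <= loss:
        rule, loss = trial, new
print(loss)
```

The printed loss is the final distance from the target lifetime; by construction it can only decrease over the run, which is the direct analog of the fitness curves shown in the text.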


Machine Learning in Discrete Rule Arrays


Let’s say we want to set up a system like a neural net—or at least a mesh neural net—but we want it to be completely discrete. (And I mean “born discrete”, not just discretized from an existing continuous system.) How can we do this? One approach (that, as it happens, I first considered in the mid-1980s—but never seriously explored) is to make what we can call a “rule array”. Like in a cellular automaton there’s an array of cells. But instead of these cells always being updated according to the same rule, each cell at each place in the cellular automaton analog of “spacetime” can make a different choice of what rule it will use. (And although it’s a fairly extreme idealization, we can potentially imagine that these different rules represent a discrete analog of different local choices of weights in a mesh neural net.)


As a first example, let’s consider a rule array in which there are two possible choices of rules: k = 2, r = 1 cellular automaton rules 4 and 146 (which are respectively class 2 and class 3):

[image]

A particular rule array is defined by which of these rules is going to be used at each (“spacetime”) position in the array. Here are a few examples. In all cases we’re starting from the same single-cell initial condition. But in each case the rule array has a different arrangement of rule choices—with cells “running” rule 4 shown with one background color, and those running rule 146 with another:

[image]

We can see that different choices of rule array can yield very different behaviors. But (in the spirit of machine learning) can we in effect “invert this”, and find a rule array that will give some particular behavior we want?


A simple approach is to do the direct analog of what we did in our minimal modeling of biological evolution: progressively make random “single-point mutations”—here “flipping” the identity of just one rule in the rule array—and then keeping only those mutations that don’t make things worse.
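This mutate-and-keep procedure on rule arrays can be sketched directly (a minimal Python sketch, not the article’s Wolfram Language code; to keep it fast it uses a smaller grid and a target lifetime of 30 rather than the 50 used below, and the mutation budget is arbitrary):

```python
import random

random.seed(2)

def eca(rule_number, left, center, right):
    """One output of an elementary (k = 2, r = 1) cellular automaton rule."""
    return (rule_number >> (4 * left + 2 * center + right)) & 1

WIDTH, STEPS, TARGET = 31, 30, 30
RULES = (4, 146)                  # a class 2 and a class 3 rule, as in the text

def run(rule_array):
    """rule_array[t][i] says which of the two rules cell i uses at step t."""
    row = [0] * WIDTH
    row[WIDTH // 2] = 1
    for t in range(STEPS):
        row = [eca(RULES[rule_array[t][i]],
                   row[i - 1], row[i], row[(i + 1) % WIDTH])
               for i in range(WIDTH)]
        if not any(row):
            return t + 1          # the pattern died at this step
    return STEPS + 1              # survived past the horizon

grid = [[random.randrange(2) for _ in range(WIDTH)] for _ in range(STEPS)]
loss = abs(run(grid) - TARGET)
for _ in range(2000):
    t, i = random.randrange(STEPS), random.randrange(WIDTH)
    grid[t][i] ^= 1               # single-point mutation: flip one rule choice
    new = abs(run(grid) - TARGET)
    if new <= loss:
        loss = new
    else:
        grid[t][i] ^= 1           # revert mutations that make things worse
print(loss)
```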


As our sample objective, let’s ask to find a rule array that makes the pattern generated from a single cell using that rule array “survive” for exactly 50 steps. At first it might not be obvious that we’d be able to find such a rule array. But in fact our simple adaptive procedure easily manages to do this:

[image]

As the dots here indicate, many mutations don’t lead to longer lifetimes. But every so often, the adaptive process has a “breakthrough” that increases the lifetime—eventually reaching 50:

[image]

Just as in our model of biological evolution, different random sequences of mutations lead to different “solutions”, here to the problem of “living for exactly 50 steps”:

[image]

Some of these are in effect “simple solutions” that require only a few mutations. But most—like most of our examples in biological evolution—seem more as if they just “happen to work”, effectively by tapping into just the right, fairly complex behavior.


Is there a sharp distinction between these cases? Looking at the collection of “fitness” (AKA “learning”) curves for the examples above, it doesn’t seem so:

[image]

It’s not too difficult to see how to “construct a simple solution” just by strategically placing a single instance of the second rule in the rule array:

[image]

But the point is that adaptive evolution by repeated mutation normally won’t “discover” this simple solution. And what’s significant is that the adaptive evolution can nevertheless still successfully find some solution—even though it’s not one that’s “understandable” like this.


The cellular automaton rules we’ve been using so far take 3 inputs. But it turns out that we can make things even simpler by just putting ordinary 2-input Boolean functions into our rule array. For example, we can make a rule array from And and Xor functions (r = 1/2 rules 8 and 6):

[image]
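Evaluating such a 2-input rule array is particularly simple. Here is a minimal Python sketch (the specific array is made up for illustration, and for simplicity the rows shrink by one cell per step rather than wrapping cyclically as in the figures): rule 8 is And, rule 6 is Xor, and each cell of a row applies its own chosen rule to the two cells above it.

```python
def boole2(rule_number, x, y):
    """A 2-input (r = 1/2) Boolean rule: bit (2x + y) of the rule number."""
    return (rule_number >> (2 * x + y)) & 1

AND, XOR = 8, 6

def run_rule_array(rule_array, row):
    """Apply one row of rule choices per step; each cell reads the two
    cells above it, so rows shrink by one cell per step."""
    rows = [row]
    for choices in rule_array:
        row = [boole2(r, row[i], row[i + 1]) for i, r in enumerate(choices)]
        rows.append(row)
    return rows

# a tiny hand-written And+Xor rule array acting on a 4-cell input
ra = [[XOR, AND, XOR],
      [AND, XOR],
      [XOR]]
print(run_rule_array(ra, [1, 0, 1, 1]))
```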

Different And+Xor rule arrays show different behavior:

[image]

But are there, for example, And+Xor rule arrays that will compute any of the 16 possible (2-input) Boolean functions? We can’t get Not or any of the other 8 “odd” functions—but it turns out that we can get all 8 “even” functions (additional inputs here are assumed to be 0):

[image]

And in fact we can also set up And+Xor rule arrays for all other “even” Boolean functions. For example, here are rule arrays for the 3-input rule 30 and rule 110 Boolean functions:

[image]

It may be worth commenting that the ability to set up such rule arrays is related to the functional completeness of the underlying rules we’re using—though it’s not quite the same thing. Functional completeness is about setting up arbitrary formulas, which can in effect allow long-range connections between intermediate results. Here, all information has to explicitly flow through the array. But for example the functional completeness of Nand (r = 1/2 rule 7) allows it to generate all Boolean functions when combined, for example, with First (r = 1/2 rule 12), though sometimes the rule arrays required are quite large:

[image]

OK, but what happens if we try to use our adaptive evolution process—say to solve the problem of finding a pattern that survives for exactly 30 steps? Here’s a result for And+Xor rule arrays:

[image]

And here are examples of other “solutions” (none of which in this case look particularly “mechanistic” or “constructed”):

[image]

But what about learning our original function f[x]? Well, first we have to decide how we’re going to represent the numbers x and f[x] in our discrete rule array system. And one approach is to do this simply in terms of the position of a black cell (“one-hot encoding”). So, for example, in this case there’s an initial black cell at a position corresponding to about x = –1.1. And then the result after passing through the rule array is a black cell at a position corresponding to f[x] = 1.0:

[image]
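The one-hot encoding itself is a one-liner. Here is a Python sketch (the range [–2, 2] and the row width of 41 are illustrative assumptions, not values given in the text): a number becomes the position of a single black cell, and the position maps straight back to a number.

```python
def encode(x, n, lo=-2.0, hi=2.0):
    """One-hot encode x as a single black cell on a row of n cells
    spanning [lo, hi]."""
    row = [0] * n
    row[round((x - lo) / (hi - lo) * (n - 1))] = 1
    return row

def decode(row, lo=-2.0, hi=2.0):
    """Read the number back off from the position of the black cell."""
    return lo + row.index(1) * (hi - lo) / (len(row) - 1)

row = encode(-1.1, 41)
print(row.index(1), decode(row))
```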

So now the question is whether we can find a rule array that successfully maps initial to final cell positions according to the mapping x → f[x] we want. Well, here’s an example that comes at least close to doing this (note that the array is taken to be cyclic):

[image]

So how did we find this? Well, we just used a simple adaptive evolution process. In direct analogy to the way it’s usually done in machine learning, we set up “training examples”, here of the form:

[image]

Then we repeatedly made single-point mutations in our rule array, keeping those mutations where the total difference from all the training examples didn’t increase. And after 50,000 mutations this gave the final result above.


We can get some sense of “how we got there” by showing the sequence of intermediate results where we got closer to the goal (as opposed to just not getting further from it):

[image]

Here are the corresponding rule arrays, in each case highlighting elements that have changed (and showing the computation of f[0] in the arrays):

[image]

Different sequences of random mutations will lead to different rule arrays. But with the setup defined here, the resulting rule arrays will almost always succeed in accurately computing f[x]. Here are a few examples—in which we’re specifically showing the computation of f[0]:

[image]

And once again an important takeaway is that we don’t see “identifiable mechanism” in what’s going on. Instead, it looks more as if the rule arrays we’ve got just “happen” to do the computations we want. Their behavior is complicated, but somehow we can manage to “tap into it” to compute our f[x].


But how robust is this computation? A key feature of typical machine learning is that it can “generalize” away from the specific examples it’s been given. It’s never been clear just how to characterize that generalization (when does an image of a cat in a dog suit start being identified as an image of a dog?). But—at least when we’re talking about classification tasks—we can think of what’s going on in terms of basins of attraction that lead to attractors corresponding to our classes.


It’s all considerably easier to analyze, though, in the kind of discrete system we’re exploring here. For example, we can readily enumerate all our training inputs (i.e. all initial states containing a single black cell), and then see how frequently these cause any given cell to be black:

[image]

By the way, here’s what happens to this plot at successive “breakthroughs” during training:

[image]

But what about all possible inputs, including ones that don’t just contain a single black cell? Well, we can enumerate all of them, and compute the overall frequency for each cell in the array to be black:

[image]
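Because the system is discrete, this kind of exhaustive enumeration is genuinely feasible. Here is a Python sketch (with a deliberately tiny, randomly chosen And+Xor rule array—its width and depth are illustrative assumptions): we run every one of the 2^n possible inputs through the array and tally how often each output cell comes out black.

```python
import itertools
import random

random.seed(3)

def boole2(rule, x, y):
    """2-input (r = 1/2) Boolean rule: bit (2x + y) of the rule number."""
    return (rule >> (2 * x + y)) & 1

WIDTH, STEPS = 8, 8
ra = [[random.choice((8, 6)) for _ in range(WIDTH)]   # 8 = And, 6 = Xor
      for _ in range(STEPS)]

def run(row):
    """Pass a row through the (cyclic) rule array, one layer per step."""
    for choices in ra:
        row = [boole2(r, row[i], row[(i + 1) % WIDTH]) for i, r in enumerate(choices)]
    return row

# enumerate every possible input and count how often each cell ends up black
inputs = list(itertools.product((0, 1), repeat=WIDTH))
freq = [0] * WIDTH
for inp in inputs:
    for i, c in enumerate(run(list(inp))):
        freq[i] += c
print([f / len(inputs) for f in freq])
```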

As we would expect, the result is considerably “fuzzier” than what we got purely with our training inputs. But there’s still a strong trace of the discrete values for f[x] that appeared in the training data. And if we plot the overall probability for a given final cell to be black, we see peaks at positions corresponding to the values 0 and 1 that f[x] takes on:

[image]

But because our system is discrete, we can explicitly look at what outcomes occur:

[image]

The most common overall is the “meaningless” all-white state—that basically occurs when the computation from the input “never makes it” to the output. But the next most common outcomes correspond exactly to f[x] = 0 and f[x] = 1. After that is the “superposition” outcome where f[x] is in effect “both 0 and 1”.


But, OK, so what initial states are “in the basins of attraction of” (i.e. will evolve to) the various outcomes here? The fairly flat plots in the last column above indicate that the overall density of black cells gives little information about what attractor a particular initial state will evolve to.


So this means we have to look at specific configurations of cells in the initial conditions. As an example, start from the initial condition

[image]

which evolves to:

[image]

Now we can ask what happens if we look at a sequence of slightly different initial conditions. And here we show in black and white initial conditions that still evolve to the original “attractor” state, and in pink ones that evolve to some different state:

[image]

What’s actually going on inside here? Here are a few examples, highlighting cells whose values change as a result of changing the initial condition:

[image]

As is typical in machine learning, there doesn’t seem to be any simple characterization of the form of the basin of attraction. But now we have a sense of what the reason for this is: it’s another consequence of computational irreducibility. Computational irreducibility gives us the effective randomness that allows us to find useful results by adaptive evolution, but it also leads to changes having what seem like random and unpredictable effects. (It’s worth noting, by the way, that we could probably dramatically improve the robustness of our attractor basins by specifically including in our training data examples that have “noise” injected.)


Multiway Mutation Graphs


In doing machine learning in practice, the goal is typically to find some collection of weights, etc. that successfully solve a particular problem. But in general there will be many such collections of weights, etc. With typical continuous weights and random training steps it’s very difficult to see what the whole “ensemble” of possibilities is. But in our discrete rule array systems, this becomes more feasible.


Consider a tiny 2×2 rule array with two possible rules. We can make a graph whose edges represent all possible “point mutations” that can occur in this rule array:

[image]
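This mutation graph is easy to construct explicitly. A minimal Python sketch (abstracting each 2×2 rule array as a 4-tuple of binary rule choices): the nodes are the 16 possible arrays, and an edge joins two arrays exactly when they differ by one point mutation—which makes the graph a 4-dimensional hypercube.

```python
import itertools

# all 2^4 = 16 possible 2x2 rule arrays with two rule choices per cell
nodes = list(itertools.product((0, 1), repeat=4))

# an edge joins two arrays that differ in exactly one cell ("point mutation")
edges = [(a, b) for a, b in itertools.combinations(nodes, 2)
         if sum(x != y for x, y in zip(a, b)) == 1]

print(len(nodes), len(edges))
```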

In our adaptive evolution process, we’re always moving around a graph like this. But typically most “moves” will end up in states that are rejected because they increase whatever loss we’ve defined.


Consider the problem of generating an And+Xor rule array in which we end with lifetime-4 patterns. Defining the loss as how far we are from this lifetime, we can draw a graph that shows all possible adaptive evolution paths that always progressively decrease the loss:

[image]

The result is a multiway graph of the type we’ve now seen in a great many kinds of situations—notably our recent study of biological evolution.


And although this particular example is quite trivial, the idea in general is that different parts of such a graph represent “different strategies” for solving a problem. And—in direct analogy to our Physics Project and our studies of things like game graphs—one can imagine such strategies being laid out in a “branchial space” defined by common ancestry of configurations in the multiway graph.


And one can expect that while in some cases the branchial graph will be fairly uniform, in other cases it will have quite separated pieces—that represent fundamentally different strategies. Of course, the fact that underlying strategies may be different doesn’t mean that the overall behavior or performance of the system will be noticeably different. And indeed one expects that in most cases computational irreducibility will lead to enough effective randomness that there’ll be no discernable difference.


But in any case, here’s an example starting with a rule array that contains both And and Xor—where we observe distinct branches of adaptive evolution that lead to different solutions to the problem of finding a configuration with a lifetime of exactly 4:

[image]

Optimizing the Learning Process


How should one actually do the learning in machine learning? In practical work with traditional neural nets, learning is normally done using systematic algorithmic methods like backpropagation. But so far, all we’ve done here is something much simpler: we’ve “learned” by successively making random point mutations, and keeping only ones that don’t lead us further from our goal. And, yes, it’s interesting that such a procedure can work at all—and (as we’ve discussed elsewhere) this is presumably very relevant to understanding phenomena like biological evolution. But, as we’ll see, there are more efficient (and probably much more efficient) methods of doing machine learning, even for the kinds of discrete systems we’re studying.


Let’s start by looking again at our earlier example of finding an And+Xor rule array that gives a “lifetime” of exactly 30. At each step in our adaptive (“learning”) process we make a single-point mutation (changing a single rule in the rule array), keeping the mutation if it doesn’t take us further from our goal. The mutations gradually accumulate—every so often reaching a rule array that gives a lifetime closer to 30. Just as above, here’s a plot of the lifetime achieved by successive mutations—with the “internal” red dots corresponding to rejected mutations:

[image]

We see a series of “plateaus” at which mutations are accumulating but not changing the overall lifetime. And between these we see occasional “breakthroughs” where the lifetime jumps. Here are the actual rule array configurations for these breakthroughs, with mutations since the last breakthrough highlighted:

[image]

But in the end the process here is quite wasteful; in this example, we make a total of 1705 mutations, but only 780 of them actually contribute to generating the final rule array; all the others are discarded along the way.


So how can we do better? One strategy is to try to figure out at each step which mutation is “most likely to make a difference”. And one way to do this is to try every possible mutation in turn at every step (as in multiway evolution)—and see what effect each of them has on the ultimate lifetime. From this we can construct a “change map” in which we give the change of lifetime associated with a mutation at each particular cell. The results will be different for every configuration of the rule array, i.e. at every step in the adaptive evolution. But for example here’s what they are for the particular “breakthrough” configurations shown above (elements in regions colored gray won’t affect the result if they are changed; ones colored red will have a positive effect, with more intense red being more positive; and ones colored blue a negative one):

[image]
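The brute-force construction of such a change map can be sketched in Python (a sketch only, on a deliberately tiny And+Xor rule array whose size and target lifetime are illustrative assumptions): flip each cell in turn, rerun the system, record how the loss would change, and restore the cell.

```python
import random

random.seed(4)

def boole2(rule, x, y):
    return (rule >> (2 * x + y)) & 1

WIDTH, STEPS, TARGET = 9, 9, 4
RULES = (8, 6)                       # 8 = And, 6 = Xor

def lifetime(grid):
    row = [0] * WIDTH
    row[WIDTH // 2] = 1
    for t, choices in enumerate(grid):
        row = [boole2(RULES[c], row[i], row[(i + 1) % WIDTH])
               for i, c in enumerate(choices)]
        if not any(row):
            return t + 1
    return STEPS + 1

def change_map(grid):
    """Flip each cell of the rule array in turn, rerun, and record how the
    loss (distance from the target lifetime) would change."""
    base = abs(lifetime(grid) - TARGET)
    cmap = []
    for t in range(STEPS):
        deltas = []
        for i in range(WIDTH):
            grid[t][i] ^= 1
            deltas.append(base - abs(lifetime(grid) - TARGET))  # > 0: improvement
            grid[t][i] ^= 1                                     # restore the cell
        cmap.append(deltas)
    return cmap

grid = [[random.randrange(2) for _ in range(WIDTH)] for _ in range(STEPS)]
cm = change_map(grid)
print(max(max(r) for r in cm))
```

Note the cost: every entry of the map requires a full rerun of the system, which is exactly the n⁴-operation scaling discussed below.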

Let’s say we start from a random rule array, then repeatedly construct the change map and apply the mutation that it implies gives the most positive change—in effect at each step following the “path of steepest descent” to get to the lifetime we want (i.e. reduce the loss). Then the sequence of “breakthrough” configurations we get is:

[image]

And this in effect corresponds to a slightly more direct “path to a solution” than our sequence of pure single-point mutations.


By the way, the particular problem of reaching a certain lifetime has a simple enough structure that this “steepest descent” method—when started from a simple uniform rule array—finds a very “mechanical” (if slow) path to a solution:

[image]

What about the problem of learning our function f[x]? Once again we can make a change map based on the loss we define. Here are the results for a sequence of “breakthrough” configurations. The gray regions are ones where changes will be “neutral”, so that there’s still exploration that can be done without affecting the loss. The red regions are ones that are in effect “locked in” and where any changes would be deleterious in terms of loss:

[image]

So what happens in this case if we follow the “path of steepest descent”, always making the change that would be best according to the change map? Well, the results are actually quite unsatisfactory. From almost any initial condition the system quickly gets stuck, and never finds any satisfactory solution. In effect it seems that deterministically following the path of steepest descent leads us to a “local minimum” from which we cannot escape. So what are we missing in just looking at the change map? Well, the change map as we’ve constructed it has the limitation that it’s separately assessing the effect of each possible individual mutation. It doesn’t deal with multiple mutations at a time—which could well be needed in general if one’s going to find the “fastest path to success”, and avoid getting stuck.


But even in constructing the change map there’s already a problem. Because at least the direct way of computing it scales quite poorly. In an n×n rule array we have to check the effect of flipping about n2 values, and for each one we have to run the whole system—taking altogether about n4 operations. And one has to do this separately for each step in the learning process.


So how do traditional neural nets avoid this kind of inefficiency? The answer in a sense involves a mathematical trick. And at least as it’s usually presented it’s all based on the continuous nature of the weights and values in neural nets—which allow us to use methods from calculus.


Let’s say we have a neural net like this

[image]

that computes some particular function f[x]:

[image]

We can ask how this function changes as we change each of the weights in the network:

[image]

And in effect this gives us something like our “change map” above. But there’s an important difference. Because the weights are continuous, we can think about infinitesimal changes to them. And then we can ask questions like “How does f[x] change when we make an infinitesimal change to a particular weight wi?”—or equivalently, “What is the partial derivative of f with respect to wi at the point x?” But now we get to use a key feature of infinitesimal changes: that they can always be thought of as just “adding linearly” (essentially because ε2 can always be ignored compared to ε). Or, in other words, we can summarize any infinitesimal change just by giving its “direction” in weight space, i.e. a vector that says how much of each weight should be (infinitesimally) changed. So if we want to change f[x] (infinitesimally) as quickly as possible, we should go in the direction of steepest descent defined by all the derivatives of f with respect to the weights.


In machine learning, we’re typically trying in effect to set the weights so that the form of f[x] we generate successfully minimizes whatever loss we’ve defined. And we do this by incrementally “moving in weight space”—at every step computing the direction of steepest descent to know where to go next. (In practice, there are all sorts of tricks like “ADAM” that try to optimize the way to do this.)


But how do we efficiently compute the partial derivative of f with respect to each of the weights? Yes, we could do the analog of generating pictures like the ones above, separately for each of the weights. But it turns out that a standard result from calculus gives us a vastly more efficient procedure that in effect “maximally reuses” parts of the computation that have already been done.


It all starts with the textbook chain rule for the derivative of nested (i.e. composed) functions:

d[c[b[a[x]]]]′ = d′[c[b[a[x]]]] c′[b[a[x]]] b′[a[x]] a′[x]

This basically says that the (infinitesimal) change in the value of the “whole chain” d[c[b[a[x]]]] can be computed as a product of (infinitesimal) changes associated with each of the “links” in the chain. But the key observation is then that when we get to the computation of the change at a certain point in the chain, we’ve already had to do a lot of the computation we need—and so long as we stored those results, we always have only an incremental computation to perform.
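The “store and reuse” idea can be made concrete in a few lines of Python (a sketch with arbitrarily chosen link functions, not anything from the text): do a forward pass storing every intermediate value, then multiply the link derivatives in a backward pass, and check the result against a numerical derivative.

```python
import math

# the links of the chain d[c[b[a[x]]]] and their hand-written derivatives
def a(x):  return x * x
def b(x):  return math.sin(x)
def c(x):  return 3.0 * x
def d(x):  return x + 1.0
def da(x): return 2.0 * x
def db(x): return math.cos(x)
def dc(x): return 3.0
def dd(x): return 1.0

x = 0.7
xa, xb, xc = a(x), b(a(x)), c(b(a(x)))     # forward pass: store intermediates

# backward pass: multiply the link derivatives, reusing the stored values
grad = dd(xc) * dc(xb) * db(xa) * da(x)

# check against a centered numerical derivative
eps = 1e-6
num = (d(c(b(a(x + eps)))) - d(c(b(a(x - eps))))) / (2 * eps)
print(grad, num)
```

Each stored intermediate is used once in the backward pass, which is the reuse that makes backpropagation so much cheaper than differencing each parameter separately.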


So how does this apply to neural nets? Well, each layer in a neural net is in effect doing a function composition. So, for example, our d[c[b[a[x]]]] is like a trivial neural net:

[image]

But what about the weights, which, after all, are what we are trying to find the effect of changing? Well, we could include them explicitly in the function we’re computing:

[image]

And then we could in principle symbolically compute the derivatives with respect to these weights:

[image]

For our network above

[image]

the corresponding expression (ignoring biases) is

[image]

where ϕ denotes our activation function. Once again we’re dealing with nested functions, and once again—though it’s a bit more intricate in this case—the computation of derivatives can be done by incrementally evaluating terms in the chain rule and in effect using the standard neural net method of “backpropagation”.


So what about the discrete case? Are there similar methods we can use there? We won’t discuss this in detail here, but we’ll give some indications of what’s likely to be involved.


As a potentially simpler case, let’s consider ordinary cellular automata. The analog of our change map asks how the value of a particular “output” cell is affected by changes in other cells—or in effect what the “partial derivative” of the output value is with respect to changes in values of other cells.


For example, consider the highlighted “output” cell in this cellular automaton evolution:

[image]

Now we can look at each cell in this array, and make a change map based on seeing whether flipping the value of just that cell (and then running the cellular automaton forwards from that point) would change the value of the output cell:

[image]

The form of the change map is different if we look at different “output cells”:

[image]

Here, by the way, are some larger change maps for this and a couple of other cellular automaton rules:

[image]

But is there a way to construct such change maps incrementally? One might have thought there would immediately be a way to do this, at least for cellular automata that (unlike the cases here) are fundamentally reversible. But actually such reversibility doesn’t seem to help much—because although it allows us to “backtrack” whole states of the cellular automaton, it doesn’t allow us to trace the separate effects of individual cells.


So how about using discrete analogs of derivatives and the chain rule? Let’s for example call the function computed by one step in rule 30 cellular automaton evolution w[x, y, z]. We can think of the “partial derivative” of this function with respect to x at the point x as representing whether the output of w changes when x is flipped starting from the value given:

[image]

(Note that “no change” is indicated as False, while a change is indicated as True. And, yes, one can either explicitly compute the rule outcomes here, and then deduce from them the functional form, or one can use symbolic rules to directly deduce the functional form.)


One can compute a discrete analog of a derivative for any Boolean function. For example, we have

[image]

and

[image]

which we can write as:

[image]

We also have:

[image]

And here is a table of “Boolean derivatives” for all 2-input Boolean functions:

[image]
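Such a Boolean derivative can be computed mechanically: the derivative of f with respect to x at (x, y) is True exactly when flipping x flips the output of f. A minimal Python sketch (the function names are just illustrative):

```python
from itertools import product

def d_dx(f):
    """Discrete 'partial derivative' with respect to x: True exactly when
    flipping x flips the output of f."""
    return lambda x, y: f(x, y) != f(not x, y)

And = lambda x, y: x and y
Xor = lambda x, y: x != y

# tabulate the derivatives over all input pairs
for x, y in product((False, True), repeat=2):
    print(x, y, d_dx(And)(x, y), d_dx(Xor)(x, y))
```

As the table shows, the derivative of Xor with respect to x is identically True (flipping x always flips the output), while the derivative of And with respect to x is just y.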

And indeed there’s a whole “Boolean calculus” one can set up for these kinds of derivatives. And in particular, there’s a direct analog of the chain rule:

[image]

where Xnor[x,y] is effectively the equality test x == y:

[image]

But, OK, how do we use this to create our change maps? In our simple cellular automaton case, we can think of our change map as representing how a change in an output cell “propagates back” to previous cells. But if we just try to apply our discrete calculus rules we run into a problem: different “chain rule chains” can imply different changes in the value of the same cell. In the continuous case this path dependence doesn’t happen because of the way infinitesimals work. But in the discrete case it does. And ultimately we’re doing a kind of backtracking that can really be represented faithfully only as a multiway system. (Though if we just want probabilities, for example, we can consider averaging over branches of the multiway system—and the change maps we showed above are effectively the result of thresholding over the multiway system.)


But despite the appearance of such difficulties in the “simple” cellular automaton case, such methods typically seem to work better in our original, more complicated rule array case. There’s a bunch of subtlety associated with the fact that we’re finding derivatives not only with respect to the values in the rule array, but also with respect to the choice of rules (which are the analog of weights in the continuous case).


Let’s consider the And+Xor rule array:

[image]

Our loss is the number of cells whose values disagree with the row shown at the bottom. Now we can construct a change map for this rule array both in a direct “forward” way, and “backwards” using our discrete derivative methods (where we effectively resolve the small amount of “multiway behavior” by always picking “majority” values):

The results are similar, though in this case not exactly the same. Here are a few other examples:

And, yes, in detail there are essentially always local differences between the results from the forward and backward methods. But the backward method—like in the case of backpropagation in ordinary neural nets—can be implemented much more efficiently. And for purposes of practical machine learning it’s actually likely to be perfectly satisfactory—especially given that the forward method is itself only providing an approximation to the question of which mutations are best to do.

And as an example, here are the results of the forward and backward methods for the problem of learning the function f[x] = , for the “breakthrough” configurations that we showed above:

What Can Be Learned?

We’ve now shown quite a few examples of machine learning in action. But a fundamental question we haven’t yet addressed is what kind of thing can actually be learned by machine learning. And even before we get to this, there’s another question: given a particular underlying type of system, what kinds of functions can it even represent?

As a first example consider a minimal neural net of the form (essentially a single-layer perceptron):

With ReLU (AKA Ramp) as the activation function and the first set of weights all taken to be 1, the function computed by such a neural net has the form:

With enough weights and biases this form can represent any piecewise linear function—essentially just by moving around ramps using biases, and scaling them using weights. So for example consider the function:

This is the function computed by the neural net above—and here’s how it’s built up by adding in successive ramps associated with the individual intermediate nodes (neurons):

(It’s similarly possible to get all smooth functions from activation functions like ELU, etc.)
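As a sketch of the ramp construction just described (with illustrative weights and biases of my own choosing, not the ones from the pictures above):

```python
# A minimal sketch (weights and biases are my own illustrative choices): a
# one-hidden-layer ReLU net with first-layer weights fixed to 1 computes a sum
# of shifted ramps, which is always a piecewise linear function.
def relu(z):
    return max(z, 0.0)

def net(x, weights, biases):
    # each hidden neuron computes relu(x + b); the output layer scales and sums
    return sum(w * relu(x + b) for w, b in zip(weights, biases))

# two ramps with opposite signs produce a rise that flattens out at x = 1:
weights = [1.0, -1.0]
biases = [0.0, -1.0]
assert net(0.5, weights, biases) == 0.5  # on the rising segment
assert net(3.0, weights, biases) == 1.0  # on the flat segment: x - (x - 1) = 1
```

Adding more (weight, bias) pairs adds more kinks, which is all it takes to approximate any piecewise linear shape.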

Things get slightly more complicated if we try to represent functions with more than one argument. With a single intermediate layer we can only get “piecewise (hyper)planar” functions (i.e. functions that change direction only at linear “fault lines”):

But already with a total of two intermediate layers—and sufficiently many nodes in each of these layers—we can generate any piecewise linear function of any number of arguments.

If we limit the number of nodes, then roughly we limit the number of boundaries between different linear regions in the values of the functions. But as we increase the number of layers with a given number of nodes, we basically increase the number of sides that polygonal regions within the function values can have:

So what happens with the mesh nets that we discussed earlier? Here are a few random examples, showing results very similar to shallow, fully connected networks with a comparable total number of nodes:

OK, so how about our fully discrete rule arrays? What functions can they represent? We already saw part of the answer earlier when we generated rule arrays to represent various Boolean functions. It turns out that there is a fairly efficient procedure based on Boolean satisfiability for explicitly finding rule arrays that can represent a given function—or determine that no rule array (say of a given size) can do this.

Using this procedure, we can find minimal And+Xor rule arrays that represent all (“even”) 3-input Boolean functions (i.e. r = 1 cellular automaton rules):

It’s always possible to specify any n-input Boolean function by an array of 2^n bits, as in:
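A quick sketch of this truth-table encoding, using the 3-input majority function as an example (the function names here are my own, for illustration):

```python
# Sketch: an n-input Boolean function is fully specified by its 2**n-bit truth
# table.
def truth_table(f, n):
    # entry i is f applied to the n binary digits of i, most significant first
    return [f(*((i >> k) & 1 for k in range(n - 1, -1, -1)))
            for i in range(2 ** n)]

majority = lambda p, q, r: int(p + q + r >= 2)
assert truth_table(majority, 3) == [0, 0, 0, 1, 0, 1, 1, 1]
assert len(truth_table(majority, 3)) == 2 ** 3
```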

But we see from the pictures above that when we “compile” Boolean functions into And+Xor rule arrays, they can take different numbers of bits (i.e. different numbers of elements in the rule array). (In effect, the “algorithmic information content” of a function varies with the “language” we’re using to represent it.) And, for example, in the n = 3 case shown here, the distribution of minimal rule array sizes is:

There are some functions that are difficult to represent as And+Xor rule arrays (and seem to require 15 rule elements)—and others that are easier. And this is similar to what happens if we represent Boolean functions as Boolean expressions (say in conjunctive normal form) and count the total number of (unary and binary) operations used:

OK, so we know that there is in principle an And+Xor rule array that will compute any (even) Boolean function. But now we can ask whether an adaptive evolution process can actually find such a rule array—say with a sequence of single-point mutations. Well, if we do such adaptive evolution—with a loss that counts the number of “wrong outputs” for, say, rule 254—then here’s a sequence of successive breakthrough configurations that can be produced:

The results aren’t as compact as the minimal solution above. But it seems to always be possible to find at least some And+Xor rule array that “solves the problem” just by using adaptive evolution with single-point mutations.
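The mutate-and-keep loop itself is simple. Here is a sketch in Python, under a deliberately simplified triangular And+Xor geometry of my own devising (not the article's actual rule-array setup): flip one And/Xor choice, and keep the change only if the loss does not increase.

```python
import random

# A sketch of single-point-mutation adaptive evolution. The triangular
# "rule array" geometry and all names here are my own simplification.

def evaluate(rules, bits):
    # each row combines adjacent pairs with its chosen ops, shrinking by one
    for row in rules:
        bits = [(a & b) if op == "and" else (a ^ b)
                for op, a, b in zip(row, bits, bits[1:])]
    return bits[0]

def loss(rules, target, n):
    # number of wrong outputs over all 2**n possible inputs
    inputs = [[(i >> k) & 1 for k in range(n)] for i in range(2 ** n)]
    return sum(evaluate(rules, x) != target(*x) for x in inputs)

def adapt(target, n, steps=2000, seed=1):
    rng = random.Random(seed)
    rules = [[rng.choice(["and", "xor"]) for _ in range(n - 1 - r)]
             for r in range(n - 1)]
    best = loss(rules, target, n)
    for _ in range(steps):
        r = rng.randrange(len(rules))
        c = rng.randrange(len(rules[r]))
        old = rules[r][c]
        rules[r][c] = "and" if old == "xor" else "xor"  # single-point mutation
        new = loss(rules, target, n)
        if new > best:
            rules[r][c] = old  # revert mutations that increase the loss
        else:
            best = new
    return rules, best

# (a & b) ^ (b & c) is exactly representable in this toy geometry:
target = lambda a, b, c: (a & b) ^ (b & c)
assert loss([["and", "and"], ["xor"]], target, 3) == 0
rules, final = adapt(target, 3)
assert final == loss(rules, target, 3)
```

Accepting equal-loss mutations (not just improvements) is what lets the walk drift across plateaus rather than getting stuck.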

Here are results for some other Boolean functions:

And so, yes, not only are all (even) Boolean functions representable in terms of And+Xor rule arrays, they’re also learnable in this form, just by adaptive evolution with single-point mutations.

In what we did above, we were looking at how machine learning works with our rule arrays in specific cases like for the function. But now we’ve got a case where we can explicitly enumerate all possible functions, at least of a given class. And in a sense what we’re seeing is evidence that machine learning tends to be very broad—and capable at least in principle of learning pretty much any function.

Of course, there can be specific restrictions. Like the And+Xor rule arrays we’re using here can’t represent (“odd”) functions where . (The Nand+First rule arrays we discussed above nevertheless can.) But in general it seems to be a reflection of the Principle of Computational Equivalence that pretty much any setup is capable of representing any function—and also adaptively “learning” it.

By the way, it’s a lot easier to discuss questions about representing or learning “any function” when one’s dealing with discrete (countable) functions—because one can expect to either be able to “exactly get” a given function, or not. But for continuous functions, it’s more complicated, because one’s pretty much inevitably dealing with approximations (unless one can use symbolic forms, which are basically discrete). So, for example, while we can say (as we did above) that (ReLU) neural nets can represent any piecewise-linear function, in general we’ll only be able to imagine successively approaching an arbitrary function, much like when you progressively add more terms in a simple Fourier series:
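As a concrete analogy, here are partial sums of the Fourier series of a square wave (a standard textbook example of my own choosing, not the article's specific curve):

```python
import math

# Sketch: partial sums of the Fourier series of a square wave, illustrating
# how adding more terms progressively approaches a target function.
def square_partial(x, n_terms):
    # odd harmonics only: (4/pi) * sum sin((2k+1)x) / (2k+1)
    return (4 / math.pi) * sum(math.sin((2 * k + 1) * x) / (2 * k + 1)
                               for k in range(n_terms))

# at x = pi/2 the square wave equals 1; more terms means a smaller error
err_1 = abs(square_partial(math.pi / 2, 1) - 1.0)
err_200 = abs(square_partial(math.pi / 2, 200) - 1.0)
assert err_200 < err_1
```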

Looking back at our results for discrete rule arrays, one notable observation is that while we can successfully reproduce all these different Boolean functions, the actual rule array configurations that achieve this tend to look quite messy. And indeed it’s much the same as we’ve seen throughout: machine learning can find solutions, but they’re not “structured solutions”; they’re in effect just solutions that “happen to work”.

Are there more structured ways of representing Boolean functions with rule arrays? Here are the two possible minimum-size And+Xor rule arrays that represent rule 30:

At the next-larger size there are more possibilities for rule 30:

And there are also rule arrays that can represent rule 110:

But in none of these cases is there obvious structure that allows us to immediately see how these computations work, or what function is being computed. But what if we try to explicitly construct—effectively by standard engineering methods—a rule array that computes a particular function? We can start by taking something like the function for rule 30 and writing it in terms of And and Xor (i.e. in ANF, or “algebraic normal form”):
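The conversion from truth table to ANF can be sketched with the standard Möbius transform over GF(2) (the code here is my own illustration):

```python
# A sketch of computing an And+Xor expression: the Möbius transform over GF(2)
# turns a truth table into its algebraic normal form.
def anf(table):
    n = len(table).bit_length() - 1
    coeffs = list(table)
    for k in range(n):
        for i in range(len(coeffs)):
            if i & (1 << k):
                coeffs[i] ^= coeffs[i ^ (1 << k)]
    return coeffs  # coeffs[m] == 1 iff the monomial with variable mask m appears

# rule 30 as a truth table, with p = bit 2, q = bit 1, r = bit 0 of the index
rule30 = [(i >> 2) ^ (((i >> 1) | i) & 1) for i in range(8)]
# ANF of rule 30 is p XOR q XOR r XOR (q AND r):
assert anf(rule30) == [0, 1, 1, 1, 1, 0, 0, 0]
```

The nonzero coefficients say exactly which And-monomials get Xored together, which is what gets laid out spatially in the evaluation graph.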

We can imagine implementing this using an “evaluation graph”:

But now it’s easy to turn this into a rule array (and, yes, we haven’t gone all the way and arranged to copy inputs, etc.):

“Evaluating” this rule array for different inputs, we can see that it indeed gives rule 30:

Doing the same thing for rule 110, the And+Xor expression is

the evaluation graph is

and the rule array is:

And at least with the evaluation graph as a guide, we can readily “see what’s happening” here. But the rule array we’re using is considerably larger than our minimal solutions above—or even than the solutions we found by adaptive evolution.

It’s a typical situation that one sees in many other kinds of systems (like for example sorting networks): it’s possible to have a “constructed solution” that has clear structure and regularity and is “understandable”. But minimal solutions—or ones found by adaptive evolution—tend to be much smaller, and they almost always look in many ways random, and aren’t readily understandable or interpretable.

So far, we’ve been looking at rule arrays that compute specific functions. But in getting a sense of what rule arrays can do, we can consider rule arrays that are “programmable”, in that their input specifies what function they should compute. So here, for example, is an And+Xor rule array—found by adaptive evolution—that takes the “bit pattern” of any (even) Boolean function as input on the left, then applies that Boolean function to the inputs on the right:

And with this same rule array we can now compute any possible (even) Boolean function. So here, for example, it’s evaluating Or:

Other Kinds of Models and Setups

Our general goal here has been to set up models that capture the most essential features of neural nets and machine learning—but that are simple enough in their structure that we can readily “look inside” and get a sense of what they are doing. Mostly we’ve concentrated on rule arrays as a way to provide a minimal analog of standard “perceptron-style” feed-forward neural nets. But what about other architectures and setups?

In effect, our rule arrays are “spacetime-inhomogeneous” generalizations of cellular automata—in which adaptive evolution determines which rule (say from a finite set) should be used at every (spatial) position and every (time) step. A different idealization (that in fact we already used in one section above) is to have an ordinary homogeneous cellular automaton—but with a single “global rule” determined by adaptive evolution. Rule arrays are the analog of feed-forward networks in which a given rule in the rule array is in effect used only once as data “flows through” the system. Ordinary homogeneous cellular automata are like recurrent networks in which a single stream of data is in effect subjected over and over again to the same rule.

There are various interpolations between these cases. For example, we can imagine a “layered rule array” in which the rules at different steps can be different, but those on a given step are all the same. Such a system can be viewed as an idealization of a convolutional neural net in which a given layer applies the same kernel to elements at all positions, but different layers can apply different kernels.
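A layered rule array can be sketched as one rule per step, shared across all positions (the cyclic boundary conditions here are my own simplification):

```python
# Sketch of a "layered rule array": one rule per time step, shared across all
# positions, the analog of a convolution kernel shared within a layer.
def step(cells, rule):
    n = len(cells)
    return [rule(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
            for i in range(n)]

def run_layered(cells, rules_per_layer):
    # different layers (steps) may use different rules
    for rule in rules_per_layer:
        cells = step(cells, rule)
    return cells

xor3 = lambda a, b, c: a ^ b ^ c  # elementary rule 150
copy = lambda a, b, c: b          # identity rule
assert run_layered([0, 1, 0, 0], [copy, copy]) == [0, 1, 0, 0]
assert run_layered([0, 1, 0, 0], [xor3]) == [1, 1, 1, 0]
```

Adaptive evolution then only has to choose the one rule used on each layer, rather than a rule for every cell.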

A layered rule array can’t encode as much information as a general rule array. But it’s still able to show machine-learning-style phenomena. And here, for example, is adaptive evolution for a layered And+Xor rule array progressively solving the problem of generating a pattern that lives for exactly 30 steps:

One could also imagine “vertically layered” rule arrays, in which different rules are used at different positions, but any given position keeps running the same rule forever. However, at least for the kinds of problems we’ve considered here, it doesn’t seem sufficient to just be able to pick the positions at which different rules are run. One seems to either need to change rules at different (time) steps, or one needs to be able to adaptively evolve the underlying rules themselves.

Rule arrays and ordinary cellular automata share the feature that the value of each cell depends only on the values of neighboring cells on the step before. But in neural nets it’s standard for the value at a given node to depend on the values of lots of nodes on the layer before. And what makes this straightforward in neural nets is that (weighted, and perhaps otherwise transformed) values from previous nodes are taken to be combined just by simple numerical addition—and addition (being n-ary and associative) can take any number of “inputs”. In a cellular automaton (or Boolean function), however, there’s always a definite number of inputs, determined by the structure of the function. In the most straightforward case, the inputs come only from nearest-neighboring cells. But there’s no requirement that this is how things need to work—and for example we can pick any “local template” to bring in the inputs for our function. This template could either be the same at every position and every step, or it could be picked from a certain set differently at different positions—in effect giving us “template arrays” as well as rule arrays.

So what about having a fully connected network, as we did in our very first neural net examples above? To set up a discrete analog of this we first need some kind of discrete n-ary associative “accumulator” function to fill the place of numerical addition. And for this we could pick a function like And, Or, Xor—or Majority. And if we’re not just going to end up with the same value at each node on a given layer, we need to set up some analog of a weight associated with each connection—which we can achieve by applying either Identity or Not (i.e. flip or not) to the value flowing through each connection.
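A minimal sketch of such a node (the layout and names here are my own, for illustration):

```python
# Sketch of the discrete fully connected idea: each connection either passes
# its bit (Identity) or flips it (Not), and each node outputs the majority of
# what arrives.
def node(inputs, flips):
    vals = [v ^ f for v, f in zip(inputs, flips)]  # f = 1 means a Not connection
    return int(2 * sum(vals) > len(vals))          # majority; ties give 0

def layer(inputs, flip_matrix):
    return [node(inputs, flips) for flips in flip_matrix]

# two nodes over three inputs: one with all-Identity connections, one all-Not
assert layer([1, 1, 0], [[0, 0, 0], [1, 1, 1]]) == [1, 0]
```

Training then amounts to mutating the entries of the flip matrix, exactly the single-point-mutation scheme used elsewhere in the text.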

Here’s an example of a network of this type, trained to compute the function we discussed above:

There are just two kinds of connections here: flip and not. And at each node we’re computing the majority function—giving value 1 if the majority of its inputs are 1, and 0 otherwise. With the “one-hot encoding” of input and output that we used before, here are a few examples of how this network evaluates our function:

This was trained just using 1000 steps of single-point mutation applied to the connection types. The loss systematically goes down—but the configuration of the connection types continues to look quite random even as it achieves zero loss (i.e. even after the function has been completely learned):

In what we’ve just done we assume that all connections continue to be present, though their types (or effectively signs) can change. But we can also consider a network where connections can end up being zeroed out during training—so that they are effectively no longer present.

Much of what we’ve done here with machine learning has centered around trying to learn transformations of the form x → f[x]. But another typical application of machine learning is autoencoding—or in effect learning how to compress data representing a certain set of examples. And once again it’s possible to do such a task using rule arrays, with learning achieved by a series of single-point mutations.

As a starting point, consider training a rule array (of cellular automaton rules 4 and 146) to reproduce unchanged a block of black cells of any width. One might have thought this would be trivial. But it’s not, because in effect the initial data inevitably gets “ground up” inside the rule array, and has to be reconstituted at the end. But, yes, it’s nevertheless possible to train a rule array to at least roughly do this—even though once again the rule arrays we find that manage to do this look quite random:

But to set up a nontrivial autoencoder let’s imagine that we progressively “squeeze” the array in the middle, creating an increasingly narrow “bottleneck” through which the data has to flow. At the bottleneck we effectively have a compressed version of the original data. And we find that at least down to some width of bottleneck, it’s possible to create rule arrays that—with reasonable probability—can act as successful autoencoders of the original data:

The success of LLMs has highlighted the use of machine learning for sequence continuation—and the effectiveness of transformers for this. But just as with other neural nets, the forms of transformers that are used in practice are typically very complicated. But can one find a minimal model that nevertheless captures the “essence of transformers”?

Let’s say that we have a sequence that we want to continue, like:

We want to encode each possible value by a vector, as in

so that, for example, our original sequence is encoded as:

Then we have a “head” that reads a block of consecutive vectors, picking off certain values and feeding pairs of them into And and Xor functions, to get a vector of Boolean values:

Ultimately this head is going to “slide” along our sequence, “predicting” what the next element in the sequence will be. But somehow we have to go from our vector of Boolean values to (probabilities of) sequence elements. Potentially we might be able to do this just with a rule array. But for our purposes here we’ll use a fully connected single-layer Identity+Not network in which at each output node we just find the sum of the values that come to it—and treat this as determining (through a softmax) the probability of the corresponding element:

In this case, the element with the maximum value is 5, so at “zero temperature” this would be our “best prediction” for the next element.

To train this whole system we just make a sequence of random point mutations to everything, keeping mutations that don’t increase the loss (where the loss is basically the difference between predicted next values and actual next values, or, more precisely, the “categorical cross-entropy”). Here’s how this loss progresses in a typical such training:
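The loss itself can be sketched directly (my own minimal version): softmax the raw scores into probabilities, then take the negative log of the probability assigned to the correct next element.

```python
import math

# Sketch of the "categorical cross-entropy" loss mentioned in the text.
def cross_entropy(scores, correct_index):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return -math.log(exps[correct_index] / total)

# a confident correct prediction costs little; a confident wrong one costs a lot
assert cross_entropy([5.0, 0.0, 0.0], 0) < cross_entropy([5.0, 0.0, 0.0], 1)
```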

At the end of this training, here are the components of our minimal transformer:

First come the encodings of the different possible elements in the sequence. Then there’s the head, here shown applied to the encoding of the first elements of the original sequence. Finally there’s a single-layer discrete network that takes the output from the head, and deduces relative probabilities for different elements to come next. In this case the highest-probability prediction for the next element is that it should be element 6.

To do the analog of an LLM we start from some initial “prompt”, i.e. an initial sequence that fits within the width (“context window”) of the head. Then we progressively apply our minimal transformer, for example at each step taking the next element to be the one with the highest predicted probability (i.e. operating “at zero temperature”). With this setup the collection of “prediction strengths” is shown in gray, with the “best prediction” shown in red:
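The zero-temperature continuation loop can be sketched like this (the toy predictor below is my own stand-in for the trained head, not the article's model):

```python
# Sketch of "zero temperature" continuation: repeatedly feed the last `window`
# elements (the context window) to a predictor and append the element with the
# highest predicted score.
def continue_sequence(seq, predict, window, n_new):
    seq = list(seq)
    for _ in range(n_new):
        scores = predict(seq[-window:])
        seq.append(max(range(len(scores)), key=scores.__getitem__))  # argmax
    return seq

def toy_predict(ctx):
    # a stand-in predictor that favors cycling through the values 0, 1, 2
    return [1.0 if k == (ctx[-1] + 1) % 3 else 0.0 for k in range(3)]

assert continue_sequence([0, 1], toy_predict, window=2, n_new=4) == [0, 1, 2, 0, 1, 2]
```

Sampling from the softmax distribution instead of taking the argmax would correspond to running at nonzero temperature.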

Running this even far beyond our original training data, we see that we get a “prediction” of a continued sine wave:

As we might expect, the fact that our minimal transformer can make such a plausible prediction relies on the simplicity of our sine curve. If we use “more complicated” training data, such as the “mathematically defined” () blue curve in

the result of training and running a minimal transformer is now:

And, not surprisingly, it can’t “figure out the computation” to correctly continue the curve. By the way, different training runs will involve different sequences of mutations, and will yield different predictions (often with periodic “hallucinations”):

In looking at “perceptron-style” neural nets we wound up using rule arrays—or, in effect, spacetime-inhomogeneous cellular automata—as our minimal models. Here we’ve ended up with a slightly more complicated minimal model for transformer neural nets. But if we were to simplify it further, we would end up not with something like a cellular automaton but instead with something like a tag system, in which one has a sequence of elements, and at each step removes a block from the beginning, and—depending on its form—adds a certain block at the end, as in:
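A tag-system step can be sketched in a few lines (the particular productions here are my own toy choice):

```python
# Sketch of one step of a tag system: remove a fixed-size block from the front
# and, based on its first symbol, append a block at the end.
def tag_step(seq, rules, block=2):
    head, rest = seq[:block], seq[block:]
    return rest + rules[head[0]]

rules = {0: [1, 0], 1: [1, 1, 0]}  # a 2-tag system with two productions
assert tag_step([1, 0, 0, 1], rules) == [0, 1, 1, 1, 0]
```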

And, yes, such systems can generate extremely complex behavior—reinforcing the idea (that we have repeatedly seen here) that machine learning works by selecting complexity that aligns with goals that have been set.

And along these lines, one can consider all sorts of different computational systems as foundations for machine learning. Here we’ve been looking at cellular-automaton-like and tag-system-like examples. But for example our Physics Project has shown us the power and flexibility of systems based on hypergraph rewriting. And from what we’ve seen here, it seems very plausible that something like hypergraph rewriting can serve as a yet more powerful and flexible substrate for machine learning.

So in the End, What’s Really Going On in Machine Learning?

There are, I think, several quite striking conclusions from what we’ve been able to do here. The first is just that models much simpler than traditional neural nets seem capable of capturing the essential features of machine learning—and indeed these models may well be the basis for a new generation of practical machine learning.

But from a scientific point of view, one of the things that’s important about these models is that they are simple enough in structure that it’s immediately possible to produce visualizations of what they’re doing inside. And studying these visualizations, the most immediately striking feature is how complicated they look.

It could have been that machine learning would somehow “crack systems”, and find simple representations for what they do. But that doesn’t seem to be what’s going on at all. Instead what seems to be happening is that machine learning is in a sense just “hitching a ride” on the general richness of the computational universe. It’s not “specifically building up behavior one needs”; rather what it’s doing is to harness behavior that’s “already out there” in the computational universe.

The fact that this could possibly work relies on the crucial—and at first unexpected—fact that in the computational universe even very simple programs can ubiquitously produce all sorts of complex behavior. And the point then is that this behavior has enough richness and diversity that it’s possible to find instances of it that align with machine learning objectives one’s defined. In some sense what machine learning is doing is to “mine” the computational universe for programs that do what one wants.

It’s not that machine learning nails a specific precise program. Rather, it’s that in typical successful applications of machine learning there are lots of programs that “do more or less the right thing”. If what one’s trying to do involves something computationally irreducible, machine learning won’t typically be able to “get well enough aligned” to correctly “get through all the steps” of the irreducible computation. But it seems that many “human-like tasks” that are the particular focus of modern machine learning can successfully be done.

And by the way, one can expect that with the minimal models explored here, it becomes more feasible to get a real characterization of what kinds of objectives can successfully be achieved by machine learning, and what cannot. Critical to the operation of machine learning is not only that there exist programs that can do particular kinds of things, but also that they can realistically be found by adaptive evolution processes.

In what we’ve done here we’ve often used what’s essentially the very simplest possible process for adaptive evolution: a sequence of point mutations. And what we’ve discovered is that even this is usually sufficient to lead us to satisfactory machine learning solutions. It could be that our paths of adaptive evolution would always be getting stuck—and not reaching any solution. But the fact that this doesn’t happen seems crucially connected to the computational irreducibility that’s ubiquitous in the systems we’re studying, and that leads to effective randomness that with overwhelming probability will “give us a way out” of anywhere we got stuck.

In some sense computational irreducibility “levels the playing field” for different processes of adaptive evolution, and lets even simple ones be successful. Something similar seems to happen for the whole framework we’re using. Any of a wide class of systems seem capable of successful machine learning, even if they don’t have the detailed structure of traditional neural nets. We can see this as a typical reflection of the Principle of Computational Equivalence: that even though systems may differ in their details, they are ultimately all equivalent in the computations they can do.

The phenomenon of computational irreducibility leads to a fundamental tradeoff, of particular importance in thinking about things like AI. If we want to be able to know in advance—and broadly guarantee—what a system is going to do or be able to do, we have to set the system up to be computationally reducible. But if we want the system to be able to make the richest use of computation, it’ll inevitably be capable of computationally irreducible behavior. And it’s the same story with machine learning. If we want machine learning to be able to do the best it can, and perhaps give us the impression of “achieving magic”, then we have to allow it to show computational irreducibility. And if we want machine learning to be “understandable” it has to be computationally reducible, and not able to access the full power of computation.

At the outset, though, it’s not obvious whether machine learning actually has to access such power. It could be that there are computationally reducible ways to solve the kinds of problems we want to use machine learning to solve. But what we’ve discovered here is that even in solving very simple problems, the adaptive evolution process that’s at the heart of machine learning will end up sampling—and using—what we can expect to be computationally irreducible processes.

Like biological evolution, machine learning is fundamentally about finding things that work—without the constraint of “understandability” that’s forced on us when we as humans explicitly engineer things step by step. Could one imagine constraining machine learning to make things understandable? To do so would effectively prevent machine learning from having access to the power of computationally irreducible processes, and from the evidence here it seems unlikely that with this constraint the kind of successes we’ve seen in machine learning would be possible.

So what does this mean for the “science of machine learning”? One might have hoped that one would be able to “look inside” machine learning systems and get detailed narrative explanations for what’s going on; that in effect one would be able to “explain the mechanism” for everything. But what we’ve seen here suggests that in general nothing like this will work. All one will be able to say is that somewhere out there in the computational universe there’s some (typically computationally irreducible) process that “happens” to be aligned with what we want.

Yes, we can make general statements—strongly based on computational irreducibility—about things like the findability of such processes, say by adaptive evolution. But if we ask “How in detail does the system work?”, there won’t be much of an answer to that. Of course we can trace all its computational steps and see that it behaves in a certain way. But we can’t expect what amounts to a “global human-level explanation” of what it’s doing. Rather, we’ll basically just be reduced to looking at some computationally irreducible process and observing that it “happens to work”—and we won’t have a high-level explanation of “why”.

But there is one important loophole to all this. Within any computationally irreducible system, there are always inevitably pockets of computational reducibility. And—as I’ve discussed at length particularly in connection with our Physics Project—it’s these pockets of computational reducibility that allow computationally bounded observers like us to identify things like “laws of nature” from which we can build “human-level narratives”.

So what about machine learning? What pockets of computational reducibility show up there, from which we might build “human-level scientific laws”? Much as with the emergence of “simple continuum behavior” from computationally irreducible processes happening at the level of molecules in a gas or ultimate discrete elements of space, we can expect that at least certain computationally reducible features will be more obvious when one’s dealing with larger numbers of components. And indeed in sufficiently large machine learning systems, it’s routine to see smooth curves and apparent regularity when one’s looking at the kind of aggregated behavior that’s probed by things like training curves.

But the question about pockets of reducibility is always whether they end up being aligned with things we consider interesting or useful. Yes, it could be that machine learning systems would exhibit some kind of collective (“EEG-like”) behavior. But what’s not clear is whether this behavior will tell us anything about the actual “information processing” (or whatever) that’s going on in the system. And if there is to be a “science of machine learning” what we have to hope for is that we can find in machine learning systems pockets of computational reducibility that are aligned with things we can measure, and care about.

So given what we’ve been able to explore here about the foundations of machine learning, what can we say about the ultimate power of machine learning systems? A key observation has been that machine learning works by “piggybacking” on computational irreducibility—and in effect by finding “natural pieces of computational irreducibility” that happen to fit with the objectives one has. But what if those objectives involve computational irreducibility—as they often do when one’s dealing with a process that’s been successfully formalized in computational terms (as in math, exact science, computational X, etc.)? Well, it’s not enough that our machine learning system “uses some piece of computational irreducibility inside”. To achieve a particular computationally irreducible objective, the system would have to do something closely aligned with that actual, specific objective.

It has to be said, however, that by laying bare more of the essence of machine learning here, it becomes easier to at least define the issues of merging typical “formal computation” with machine learning. Traditionally there’s been a tradeoff between the computational power of a system and its trainability. And indeed in terms of what we’ve seen here this seems to reflect the sense that “larger chunks of computational irreducibility” are more difficult to fit into something one’s incrementally building up by a process of adaptive evolution.

So how should we ultimately think of machine learning? In effect its power comes from leveraging the “natural resource” of computational irreducibility. But when it uses computational irreducibility it does so by “foraging” pieces that happen to advance its objectives. Imagine one’s building a wall. One possibility is to fashion bricks of a particular shape that one knows will fit together. But another is just to look at stones one sees lying around, then to build the wall by fitting these together as best one can.

\n

And if one then asks “Why does the wall have such-and-such a pattern?” the answer will end up being basically “Because that’s what one gets from the stones that happened to be lying around”. There’s no overarching theory to it in itself; it’s just a reflection of the resources that were out there. Or, in the case of machine learning, one can expect that what one sees will be to a large extent a reflection of the raw characteristics of computational irreducibility. In other words, the foundations of machine learning are as much as anything rooted in the science of ruliology. And it’s in large measure to that science we should look in our efforts to understand more about “what’s really going on” in machine learning, and quite possibly also in neuroscience.

\n

Historical & Personal Notes

\n

In some ways it seems like a quirk of intellectual history that the kinds of foundational questions I’ve been discussing here weren’t already addressed long ago—and in some ways it seems like an inexorable consequence of the only rather recent development of certain intuitions and tools.

\n

The idea that the brain is fundamentally made of connected nerve cells was considered in the latter part of the nineteenth century, and took hold in the first decades of the twentieth century—with the formalized concept of a neural net that operates in a computational way emerging in full form in the work of Warren McCulloch and Walter Pitts in 1943. By the late 1950s there were hardware implementations of neural nets (typically for image processing) in the form of “perceptrons”. But despite early enthusiasm, practical results were mixed, and at the end of the 1960s it was announced that simple cases amenable to mathematical analysis had been “solved”—leading to a general belief that “neural nets couldn’t do anything interesting”.

\n

Ever since the 1940s there had been a trickle of general analyses of neural nets, particularly using methods from physics. But typically these analyses ended up with things like continuum approximations—that could say little about the information-processing aspects of neural nets. Meanwhile, there was an ongoing undercurrent of belief that somehow neural networks would both explain and reproduce how the brain works—but no methods seemed to exist to say quite how. Then at the beginning of the 1980s there was a resurgence of interest in neural networks, coming from several directions. Some of what was done concentrated on very practical efforts to get neural nets to do particular “human-like” tasks. But some was more theoretical, typically using methods from statistical physics or dynamical systems.

\n

Before long, however, the buzz died down, and for several decades only a few groups were left working with neural nets. Then in 2011 came a surprise breakthrough in using neural nets for image analysis. It was an important practical advance. But it was driven by technological ideas and development—not any significant new theoretical analysis or framework.

\n

And this was also the pattern for almost all of what followed. People spent great effort to come up with neural net systems that worked—and all sorts of folklore grew up about how this should best be done. But there wasn’t really even an attempt at an underlying theory; this was a domain of engineering practice, not basic science.

\n

And it was in this tradition that ChatGPT burst onto the scene in late 2022. Almost everything about LLMs seemed to be complicated. Yes, there were empirically some large-scale regularities (like scaling laws). And I quickly suspected that the success of LLMs was a strong hint of general regularities in human language that hadn’t been clearly identified before. But beyond a few outlier examples, almost nothing about “what’s going on inside LLMs” has seemed easy to decode. And efforts to put “strong guardrails” on the operation of the system—in effect so as to make it in some way “predictable” or “understandable”—typically seem to substantially decrease its power (a point that now makes sense in the context of computational irreducibility).

\n

My own interaction with machine learning and neural nets began in 1980 when I was developing my SMP symbolic computation system, and wondering whether it might be possible to generalize the symbolic pattern-matching foundations of the system to some kind of “fuzzy pattern matching” that would be closer to human thinking. I was aware of neural nets but thought of them as semi-realistic models of brains, not for example as potential sources of algorithms of the kind I imagined might “solve” fuzzy matching.

\n

And it was partly as a result of trying to understand the essence of systems like neural nets that in 1981 I came up with what I later learned could be thought of as one-dimensional cellular automata. Soon I was deeply involved in studying cellular automata and developing a new intuition about how complex behavior could arise even from simple rules. But when I learned about recent efforts to make idealized models of neural nets using ideas from statistical mechanics, I was at least curious enough to set up simulations to try to understand more about these models.

\n

But what I did wasn’t a success. I could neither get the models to do anything of significant practical interest—nor did I manage to derive any good theoretical understanding of them. I kept wondering, though, what relationship there might be between cellular automata that “just run”, and systems like neural nets that can also “learn”. And in fact in 1985 I tried to make a minimal cellular-automaton-based model to explore this. It was what I’m now calling a “vertically layered rule array”. And while in many ways I was already asking the right questions, this was an unfortunate specific choice of system—and my experiments on it didn’t reveal the kinds of phenomena we’re now seeing.

\n

Years went by. I wrote a section on “Human Thinking” in A New Kind of Science that discussed the possibility of simple foundational rules for the essence of thinking, and even included a minimal discrete analog of a neural net. At the time, though, I didn’t develop these ideas further. By 2017, 15 years after the book was published—and knowing about the breakthroughs in deep learning—I had begun to think more concretely about neural nets as getting their power by sampling programs from across the computational universe. But still I didn’t see quite how this would work.

\n

Meanwhile, there was a new intuition emerging from practical experience with machine learning: that if you “bashed” almost any system “hard enough”, it would learn. Did that mean that perhaps one didn’t need all the details of neural networks to successfully do machine learning? And could one perhaps make a system whose structure was simple enough that its operation would for example be accessible to visualization? I particularly wondered about this when I was writing an exposition of ChatGPT and LLMs in early 2023. And I kept talking about “LLM science”, but didn’t have much of a chance to work on it.

\n

But then, a few months ago, as part of an effort to understand the relation between what science does and what AI does, I tried a kind of “throwaway experiment”—which, to my considerable surprise, seemed to successfully capture some of the essence of what makes biological evolution possible. But what about other adaptive evolution—and in particular, machine learning? The models that seemed to be needed were embarrassingly close to what I’d studied in 1985. But now I had a new intuition—and, thanks to Wolfram Language, vastly better tools. And the result has been my effort here.

\n

Of course this is only a beginning. But I’m excited to be able to see what I consider to be the beginnings of foundational science around machine learning. Already there are clear directions for practical applications (which, needless to say, I plan to explore). And there are signs that perhaps we may finally be able to understand just why—and when—the “magic” of machine learning works.

\n

Thanks

\n

Thanks to Richard Assar of the Wolfram Institute for extensive help. Thanks also to Brad Klee, Tianyi Gu, Nik Murzin and Max Niederman for specific results, to George Morgan and others at Symbolica for their early interest, and to Kovas Boguta for suggesting many years ago to link machine learning to the ideas in A New Kind of Science.

\n", + "category": "Artificial Intelligence", + "link": "https://writings.stephenwolfram.com/2024/08/whats-really-going-on-in-machine-learning-some-minimal-models/", + "creator": "Stephen Wolfram", + "pubDate": "Thu, 22 Aug 2024 18:28:17 +0000", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "en", + "folder": "", + "feed": "wolfram", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "b0eee7b26b7649f86009add905dd9d8c", + "highlights": [] + }, + { + "title": "Yet More New Ideas and New Functions: Launching Version 14.1 of Wolfram Language & Mathematica", + "description": "\"\"For the 36th Time… the Latest from Our R&D Pipeline There’s Now a Unified Wolfram App Vector Databases and Semantic Search RAGs and Dynamic Prompting for LLMs Connect to Your Favorite LLM Symbolic Arrays and Their Calculus Binomials and Pitchforks: Navigating Mathematical Conventions Fixed Points and Stability for Differential and Difference Equations The Steady Advance […]", + "content": "\"\"\n\n

For the 36th Time… the Latest from Our R&D Pipeline

\n

Today we celebrate the arrival of the 36th (x.x) version of the Wolfram Language and Mathematica: Version 14.1. We’ve been doing this since 1986: continually inventing new ideas and implementing them in our larger and larger tower of technology. And it’s always very satisfying to be able to deliver our latest achievements to the world.

\n

We released Version 14.0 just half a year ago. And—following our modern version scheduling—we’re now releasing Version 14.1. For most technology companies a .1 release would contain only minor tweaks. But for us it’s a snapshot of what our whole R&D pipeline has delivered—and it’s full of significant new features and new enhancements.

\n

If you’ve been following our livestreams, you may have already seen many of these features and enhancements being discussed as part of our open software design process. And we’re grateful as always to members of the Wolfram Language community who’ve made suggestions—and requests. And in fact Version 14.1 contains a particularly large number of long-requested features, some of which involved development that has taken many years and required many intermediate achievements.

\n

There’s lots of both extension and polishing in Version 14.1. There are a total of 89 entirely new functions—more than in any other version for the past couple of years. And there are also 137 existing functions that have been substantially updated. Along with more than 1300 distinct bug fixes and specific improvements.

\n

Some of what’s new in Version 14.1 relates to AI and LLMs. And, yes, we’re riding the leading edge of these kinds of capabilities. But the vast majority of what’s new has to do with our continued mission to bring computational language and computational knowledge to everything. And today that mission is even more important than ever, supporting not only human users, but also rapidly proliferating AI “users”—who are beginning to be able to routinely make even broader and deeper use of our technology than humans.

\n

Each new version of Wolfram Language represents a large amount of R&D by our team, and the encapsulation of a surprisingly large number of ideas about what should be implemented, and how it should be implemented. So, today, here it is: the latest stage in our four-decade journey to bring the superpower of the computational paradigm to everything.

\n

There’s Now a Unified Wolfram App

\n

In the beginning we just had “Mathematica”—that we described as “A System for Doing Mathematics by Computer”. But the core of “Mathematica”—based on the very general concept of transformations for symbolic expressions—was always much broader than “mathematics”. And it didn’t take long before “mathematics” was an increasingly small part of the system we had built. We agonized for years about how to rebrand things to better reflect what the system had become. And eventually, just over a decade ago, we did the obvious thing, and named what we had “the Wolfram Language”.

\n

But when it came to actual software products and executables, so many people were familiar with having a “Mathematica” icon on their desktop that we didn’t want to change that. Later we introduced Wolfram|One as a general product supporting Wolfram Language across desktop and cloud—with Wolfram Desktop being its desktop component. But, yes, it’s all been a bit confusing. Ultimately there’s just one “bag of bits” that implements the whole system we’ve built, even though there are different usage patterns, and differently named products that the system supports. Up to now, each of these different products has been a different executable that’s separately downloaded.

\n

But starting with Version 14.1 we’re unifying all these things—so that now there’s just a single unified Wolfram app that can be configured and activated in different ways corresponding to different products.

\n

So now you just go to wolfram.com/download-center and download the Wolfram app:

\n

Wolfram app

\n

After you’ve installed the app, you activate it as whatever product(s) you’ve got: Wolfram|One, Mathematica, Wolfram|Alpha Notebook Edition, etc. Why have separate products? Each one has a somewhat different usage pattern, and provides a somewhat different interface optimized for that usage pattern. But now the actual downloading of bits has been unified; you just have to download the unified Wolfram app and you’ll get what you need.

\n

Vector Databases and Semantic Search

\n

Let’s say you’ve got a million documents (or webpages, or images, or whatever) and you want to find the ones that are “closest” to something. Version 14.1 now has a function—SemanticSearch—for doing this. How does SemanticSearch work? Basically it uses machine learning methods to find “vectors” (i.e. lists) of numbers that somehow represent the “meaning” of each of your documents. Then when you want to know which documents are “closest” to something, SemanticSearch computes the vector for the something, and then sees which of the document vectors are closest to this vector.
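To make the core idea concrete, here’s a minimal Python sketch of vector-based search—not what SemanticSearch actually does internally, and using a toy bag-of-words “embedding” (a stand-in for the learned neural embeddings real systems use):

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words count vector. Real semantic search
    # uses learned neural embeddings that capture meaning, not word overlap.
    return Counter(text.lower().split())

def cosine_distance(u, v):
    # Distance between two sparse vectors: 1 - cosine similarity
    dot = sum(u[w] * v[w] for w in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return 1 - dot / (nu * nv)

def semantic_search(query, documents, n=3):
    # Rank documents by vector distance to the query's vector
    qv = embed(query)
    return sorted(documents, key=lambda d: cosine_distance(qv, embed(d)))[:n]

docs = ["large marine mammal that sings",
        "small house pet that purrs",
        "fast land animal with stripes"]
print(semantic_search("marine mammal", docs, n=1))
```

The essential structure—documents become vectors once, and each query becomes a vector that is compared against them—is the same whether the vectors live in memory or in a vector database on disk.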

\n

In principle one could use Nearest to find closest vectors. And indeed this works just fine for small examples where one can readily store all the vectors in memory. But SemanticSearch uses a full industrial-strength approach based on the new vector database capabilities of Version 14.1—which can work with huge collections of vectors stored in external files.

\n

There are lots of ways to use both SemanticSearch and vector databases. You can use them to find documents, snippets within documents, images, sounds or anything else whose “meaning” can somehow be captured by a vector of numbers. Sometimes the point is to retrieve content directly for human consumption. But a particularly strong modern use case is to set up “retrieval-augmented generation” (RAG) for LLMs—in which relevant content found with a vector database is used to provide a “dynamic prompt” for the LLM. And indeed in Version 14.1—as we’ll discuss later—we now have LLMPromptGenerator to implement exactly this pipeline.

\n

But let’s come back to SemanticSearch on its own. Its basic design is modeled after TextSearch, which does keyword-based searching of text. (Note, though, that SemanticSearch also works on many things other than text.)

\n

In direct analogy to CreateSearchIndex for TextSearch, there’s now a CreateSemanticSearchIndex for SemanticSearch. Let’s do a tiny example to see how it works. Essentially we’re going to make an (extremely restricted) “inverse dictionary”. We set up a list of definition word elements:

\n
\n
\n

\n

Now create a semantic search index from this:

\n
\n
\n

\n

Behind the scenes this is a vector database. But we can access it with SemanticSearch:

\n
\n
\n

\n

And since “whale” is considered closest, it comes first.

\n

What about a more realistic example? Instead of just using 3 words, let’s set up definitions for all words in the dictionary. It takes a little while (like a few minutes) to do the machine learning feature extraction for all the definitions. But in the end you get a new semantic search index:

\n
\n
\n

\n

This time it has 39,186 entries—but SemanticSearch picks out the (by default) 10 that it considers closest to what you asked for (and, yes, there’s an archaic definition of “seahorse” as “walrus”):

\n
\n
\n

\n

We can see a bit more detail about what’s going on by asking SemanticSearch to explicitly show us distances:

\n
\n
SemanticSearch distances
\n

\n

And plotting these we can see that “whale” is the winner by a decent margin:

\n
\n
\n

\n

One subtlety when dealing with semantic search indices is where to store them. When they’re sufficiently small, you can store them directly in memory, or in a notebook. But usually you’ll want to store them in a separate file, and if you want to share an index you’ll want to put this file in the cloud. You can do this either interactively from within a notebook

\n

SemanticSearchIndex

\n

or programmatically:

\n
\n
\n

\n

And now the SemanticSearchIndex object you have can be used by anyone, with its data being accessed in the cloud.

\n

In most cases SemanticSearch will be what you need. But sometimes it’s worthwhile to “go underneath” and directly work with vector databases. Here’s a collection of small vectors:

\n
\n
\n

\n

We can use Nearest to find the nearest vector to one we give:

\n
\n
\n

\n

But we can also do this with a vector database. First we create the database:

\n
\n
\n

\n

And now we can search for the nearest vector to the one we give:

\n
\n
\n

\n

In this case we get exactly the same answer as from Nearest. But whereas the mission of Nearest is to give us the mathematically precise nearest vector, VectorDatabaseSearch is doing something less precise—but is able to do it for extremely large numbers of vectors that don’t need to be stored directly in memory.

\n

Those vectors can come from anywhere. For example, here they’re coming from extracting features from some images:

\n
\n
\n

\n

Now let’s say we’ve got a specific image. Then we can search our vector database to get the image whose feature vector is closest to the one for the image we provided:

\n
\n
\n

\n

And, yes, this works for other kinds of objects too:

\n
\n
\n

\n
\n
\n

\n

CreateSemanticSearchIndex and CreateVectorDatabase create vector databases from scratch using data you provide. But—just like with text search indices—an important feature of vector databases is that you can incrementally add to them. So, for example, UpdateSemanticSearchIndex and AddToVectorDatabase let you efficiently add individual entries or lists of entries to vector databases.

\n

In addition to providing capabilities for building (and growing) your own vector databases, there are several pre-built vector databases that are now available in the Wolfram Data Repository:

\n

Vector Databases

\n

So now we can use a pre-built vector database of Wolfram Language function documentation to do a semantic search for snippets that are “semantically close” to being about iterating functions:

\n
\n
\n

\n

(In the next section, we’ll see how to actually “synthesize a report” based on this.)

\n

The basic function of SemanticSearch is to determine what “chunks of content” are closest to what you are asking about. But given a semantic search index (AKA vector database) there are also other important things you can do. One of them is to use TextSummarize to ask not for specific chunks but rather for some kind of overall summary of what can be said about a given topic from the content in the semantic search index:

\n
\n
\n

\n

RAGs and Dynamic Prompting for LLMs

\n

How does one tell an LLM what one wants it to do? Fundamentally, one provides a prompt, and then the LLM generates output that “continues” that prompt. Typically the last part of the prompt is the specific question (or whatever) that a user is asking. But before that, there’ll be “pre-prompts” that prime the LLM in various ways to determine how it should respond.

\n

In Version 13.3 in mid-2023 (i.e. a long time ago in the world of LLMs!) we introduced LLMPrompt as a symbolic way to specify a prompt, and we launched the Wolfram Prompt Repository as a broad source for pre-built prompts. Here’s an example of using LLMPrompt with a prompt from the Wolfram Prompt Repository:

\n
\n
\n

\n

In its simplest form, LLMPrompt just adds fixed text to “pre-prompt” the LLM. LLMPrompt is also set up to take arguments that modify the text it’s adding:

\n
\n
\n

\n

But what if one wants the LLM to be pre-prompted in a way that depends on information that’s only available once the user actually asks their question (like, for example, the text of the question itself)? In Version 14.1 we’re adding LLMPromptGenerator to dynamically generate pre-prompts. And it turns out that this kind of “dynamic prompting” is remarkably powerful, and—particularly together with tool calling—opens up a whole new level of capabilities for LLMs.
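The shape of “dynamic prompting” is easy to sketch in Python (the generator functions and user name here are purely hypothetical; this is the control flow, not the LLMPromptGenerator implementation):

```python
from datetime import datetime

def name_generator(question):
    # Hypothetical generator: injects an assumed registered user name
    return "The user's registered name is: Jane Doe."

def time_generator(question):
    # Injects information only available once the question is asked
    return f"The current time is {datetime.now().isoformat()}."

def build_prompt(question, generators):
    # Each generator sees the actual question and contributes a pre-prompt;
    # the final prompt is the pre-prompts followed by the user's question
    pre_prompts = [g(question) for g in generators]
    return "\n".join(pre_prompts + [f"Question: {question}"])

prompt = build_prompt("Is it past my bedtime?", [name_generator, time_generator])
print(prompt)
```

The key point is that the pre-prompt is computed per question, so it can depend on the question text, the user, the time, or anything else known only at ask time.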

\n

For example, we can set up a prompt generator that produces a pre-prompt that gives the registered name of the user, so the LLM can use this information when generating its answer:

\n
\n
\n

\n

Or for example here the prompt generator is producing a pre-prompt about sunrise, sunset and the current time:

\n
\n
\n

\n

And, yes, if the pre-prompt contains extra information (like about the Moon) the LLM will (probably) ignore it:

\n
\n
\n

\n

As another example, we can take whatever the user asks, and first do a web search on it, then include as a pre-prompt snippets we get from the web. The result is that we can get answers from the LLM that rely on specific “web knowledge” that we can’t expect will be “known in detail” by the raw LLM:

\n
\n
\n

\n

But often one doesn’t want to just “search at random on the web”; instead one wants to systematically retrieve information from some known source to give as “briefing material” to the LLM to help it in generating its answer. And a typical way to implement this kind of “retrieval-augmented generation (RAG)” is to set up an LLMPromptGenerator that uses the SemanticSearch and vector database capabilities that we introduced in Version 14.1.
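The retrieval-augmented generation pipeline can be sketched as follows—a toy Python illustration in which word overlap stands in for the semantic search step, and the assembled “briefing” string is what would be handed to the LLM:

```python
def retrieve(query, corpus, k=2):
    # Toy retrieval: rank snippets by word overlap with the query.
    # A real RAG setup would use a vector database / semantic search here.
    qwords = set(query.lower().split())
    return sorted(corpus,
                  key=lambda s: -len(qwords & set(s.lower().split())))[:k]

def rag_prompt(question, corpus):
    # "Briefing material" retrieved from the source, then the question
    snippets = retrieve(question, corpus)
    briefing = "\n".join(f"- {s}" for s in snippets)
    return (f"Use the following snippets to answer.\n{briefing}\n"
            f"Question: {question}")

corpus = ["Version 14.1 adds SemanticSearch.",
          "Vector databases store embedding vectors in external files.",
          "LLMPromptGenerator dynamically generates pre-prompts."]
print(rag_prompt("Where are embedding vectors stored?", corpus))
```

Retrieval finds candidate snippets; the prompt assembly step then packages them as context so the LLM’s answer can rely on material it was never trained on.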

\n

So, for example, here’s a semantic search index generated from my (rather voluminous) writings:

\n
\n
\n

\n

By setting up a prompt generator based on this, I can now ask the LLM “personal questions”:

\n
\n
\n

\n

How did the LLM “know that”? Internally the prompt generator used SemanticSearch to generate a collection of snippets, which the LLM then “trawled through” to produce a specific answer:

\n
\n
\n

\n

It’s already often very useful just to “retrieve static text” to “brief” the LLM. But even more powerful is to brief the LLM with what it needs to call tools that can do further computation, etc. So, for example, if you want the LLM to write and run Wolfram Language code that uses functions you’ve created, you can do that by having it first “read the documentation” for those functions.

\n

As an example, this uses a prompt generator that uses a semantic search index built from the Wolfram Function Repository:

\n
\n
\n

\n

Connect to Your Favorite LLM

\n

There are now many ways to use LLM functionality from within the Wolfram Language, and Wolfram Notebooks. You can do it programmatically, with LLMFunction, LLMSynthesize, etc. You can do it interactively through Chat Notebooks and related chat capabilities.

\n

But (at least for now) there’s no full-function LLM built directly into the Wolfram Language. So that means that (at least for now) you have to choose your “flavor” of external LLM to power Wolfram Language LLM functionality. And in Version 14.1 we have support for basically all major available foundation-model LLMs.

\n

We’ve made it as straightforward as possible to set up connections to external LLMs. Once you’ve done it, you can select what you want directly in any Chat Notebook

\n

Choose your LLM

\n

or from your global Preferences:

\n

LLM global preferences

\n

When you’re using a function you specify the “model” (i.e. service and specific model name) as part of the setting for LLMEvaluator:

\n
\n
\n

\n

In general you can use LLMConfiguration to define the whole configuration of an LLM you want to connect to, and you can make a particular configuration your default either interactively using Preferences, or by explicitly setting the value of $LLMEvaluator.

\n

So how do you initially set up a connection to a new LLM? You can do it interactively by pressing Connect in the AI Settings pane of Preferences. Or you can do it programmatically using ServiceConnect:

\n

ServiceConnect

\n

At the “ServiceConnect level” you have very direct access to the features of LLM APIs, though unless you’re studying LLM APIs you probably won’t need to use these. But talking of LLM APIs, one of the things that’s now easy to do with Wolfram Language is to compare LLMs, for example programmatically sending the same question to multiple LLMs:

\n
\n
\n

\n

And in fact we’ve recently started posting weekly results that we get from a full range of LLMs on the task of writing Wolfram Language code (conveniently, the exercises in my book An Elementary Introduction to the Wolfram Language have textual “prompts”, and we have a well-developed system that we’ve used for many years in assessing code for the online course based on the book):

\n

Wolfram LLM Benchmarking Project

\n

Symbolic Arrays and Their Calculus

\n

I want A to be an n×n matrix. I don’t want to say what its elements are, and I don’t even want to say what n is. I just want to have a way to treat the whole thing symbolically. Well, in Version 14.1 we’ve introduced MatrixSymbol to do that.

\n

A MatrixSymbol has a name (just like an ordinary symbol)—and it has a way to specify its dimensions. We can use it, for example, to set up a symbolic representation for our matrix A:

\n
\n
\n

\n

Hovering over this in a notebook, we’ll get a tooltip that explains what it is:

\n

Matrix dimensions tooltip

\n

We can ask for its dimensions as a tensor:

\n
\n
\n

\n

Here’s its inverse, again represented symbolically:

\n
\n
\n

\n

That also has dimensions n×n:

\n
\n
\n

\n
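As a point of comparison, Python’s SymPy library has a construct also called MatrixSymbol that captures the same idea—a named matrix with symbolic dimensions and no explicit elements (shown here only as an analogy, not as an equivalent of the Wolfram Language functionality):

```python
from sympy import MatrixSymbol, Inverse, symbols

n = symbols('n', positive=True, integer=True)
A = MatrixSymbol('A', n, n)   # an n×n matrix with unspecified elements

print(A.shape)             # (n, n)
print(Inverse(A).shape)    # (n, n): the inverse is also n×n
print((A * A).shape)       # the matrix product keeps the n×n shape
```

Shape bookkeeping like this—knowing that an inverse or a product is again n×n without ever naming an element—is exactly the kind of reasoning symbolic matrices make automatic.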

In Version 14.1 you can not only have symbolic matrices, you can also have symbolic vectors and, for that matter, symbolic arrays of any rank. Here’s a length-n symbolic vector (and, yes, we can have a symbolic vector named v that we assign to a symbol v):

\n
\n
\n

\n
\n
\n

\n

So now we can construct something like the quadratic form:

\n
\n
\n

\n

A classic thing to compute from this is its gradient with respect to the vector v:

\n
\n
\n

\n

And actually this is just the same as the “vector derivative”:

\n
\n
\n

\n

If we do a second derivative we get:

\n
\n
\n

\n
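The standard symbolic results here—that the gradient of the quadratic form v.A.v is (A + Aᵀ).v, and that the second derivative is the constant matrix A + Aᵀ—are easy to confirm numerically (a NumPy finite-difference check, not the symbolic machinery itself):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
v = rng.standard_normal(n)

def f(x):
    return x @ A @ x   # the quadratic form v.A.v

# Numerical gradient via central differences
eps = 1e-6
grad = np.array([(f(v + eps * e) - f(v - eps * e)) / (2 * eps)
                 for e in np.eye(n)])

# The symbolic gradient is (A + A^T).v
assert np.allclose(grad, (A + A.T) @ v, atol=1e-5)
```

Since f is quadratic, its second derivative A + Aᵀ is constant, which is why the symbolic answer contains no remaining dependence on v.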

What happens if we differentiate v with respect to v? Well, then we get a symbolic identity matrix

\n
\n
\n

\n

which again has dimensions n×n:

\n
\n
\n

\n

This symbolic identity matrix is a rank-2 example of a symbolic identity array:

\n
\n
\n

\n

If we give n an explicit value, we can get an explicit componentwise array:

\n
\n
\n

\n

Let’s say we have a function of v, like Total. Once again we can find the derivative with respect to v:

\n
\n
\n

\n

And now we see another symbolic array construct: SymbolicOnesArray:

\n
\n
\n

\n

This is simply an array whose elements are all 1:

\n
\n
\n

\n

Differentiating a second time gives us a SymbolicZerosArray:

\n
\n
\n

\n

Although we’re not defining explicit elements for v, it’s sometimes important to specify, for example, that all the elements are reals:

\n
\n
\n

\n

For a vector whose elements are reals, it’s straightforward to find the derivative of the norm:

\n
\n
\n

\n
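That first derivative—the gradient of the norm of a real vector v being v divided by its norm—can also be spot-checked numerically (again just a NumPy finite-difference sketch):

```python
import numpy as np

v = np.array([3.0, -1.0, 2.0])

def norm(x):
    return np.sqrt(x @ x)

# Central-difference gradient of Norm at v
eps = 1e-7
grad = np.array([(norm(v + eps * e) - norm(v - eps * e)) / (2 * eps)
                 for e in np.eye(3)])

# The symbolic derivative of the norm, for real elements, is v/Norm[v]
assert np.allclose(grad, v / norm(v), atol=1e-6)
```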

The third derivative, though, is a bit more complicated:

\n
\n
\n

\n

The ⊗ here is TensorProduct, and the T:(1,3,2) represents Transpose[..., {1, 3, 2}].

\n

In the Wolfram Language, a symbol, say s, can stand on its own, and represent a “variable”. It can also appear as a head—as in s[x]—and represent a function. And the same is true for vector and matrix symbols:

\n
\n
\n

\n

Importantly, the chain rule also works for matrix and vector functions:

\n
\n
\n

\n

Things get a bit trickier when one’s dealing with functions of matrices:

\n
\n
\n

\n

The subscripted dot here represents ArrayDot[..., ..., 2], which is a generalization of Dot. Given two arrays u and v, Dot will contract the last index of u with the first index of v:

\n
\n
\n

\n

ArrayDot[u, v, n], on the other hand, contracts the last n indices of u with the first n of v. ArrayDot[u, v, 1] is just the same as Dot[u, v]:

\n
\n
\n

\n

But now in this particular example all the indices get “contracted out”:

\n
\n
\n

\n
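NumPy’s tensordot happens to have exactly this contraction convention, so the behavior of ArrayDot is easy to illustrate outside the Wolfram Language (shapes here are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.standard_normal((2, 3, 4))
v = rng.standard_normal((3, 4, 5))

# np.tensordot(u, v, n) contracts the last n indices of u with the
# first n indices of v -- the same convention as ArrayDot[u, v, n]
w = np.tensordot(u, v, 2)
print(w.shape)   # (2, 5): the 3 and 4 axes were contracted away

# With n = 1 this agrees with the ordinary dot product
a = rng.standard_normal((2, 3))
b = rng.standard_normal((3, 4))
assert np.allclose(np.tensordot(a, b, 1), a @ b)
```

When the contraction count equals the full rank of both arguments, every index is contracted out and the result is a scalar—which is what happens in the matrix-function example above.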

We’ve talked about symbolic vectors and matrices. But—needless to say—what we have is completely general, and will work for arrays of any rank. Here’s an example of a p×q×r array:

\n
\n
\n

\n

The overscript indicates that this is an array of rank 3.

\n

When one takes derivatives, it’s very easy to end up with high-rank arrays. Here’s the result of differentiating with respect to a matrix:

\n
\n
\n

\n

This is a rank-4 n×n×n×n symbolic identity array.

\n

When one’s dealing with higher-rank objects, there’s one more construct that appears—that we call SymbolicDeltaProductArray. Let’s set up a rank-3 array with dimensions 3×3×3:

\n
\n
\n

\n

Now let’s compute a derivative:

\n
\n
\n

\n

The result is a rank-5 array that’s effectively a combination of two KroneckerDelta objects for indices 1,4 and 2,5, respectively:

\n
\n
\n

\n

We can visualize this with ArrayPlot3D:

\n
\n
\n

\n
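The structure of such a delta product array—nonzero exactly where index 1 matches index 4 and index 2 matches index 5—can be reconstructed explicitly (a NumPy illustration of the array the text describes, not the symbolic object itself):

```python
import numpy as np

n = 3
# Rank-5 array: 1 exactly where index 1 == index 4 and index 2 == index 5
# (1-based), i.e. a product of two KroneckerDeltas
D = np.fromfunction(
    lambda i, j, k, l, m: ((i == l) & (j == m)).astype(float),
    (n, n, n, n, n))

print(D.shape)                              # (3, 3, 3, 3, 3)
print(D[1, 2, 0, 1, 2], D[1, 2, 0, 2, 1])  # 1.0 0.0
print(D.sum())                              # 27.0: one nonzero per (i, j, k)
```

The third index is unconstrained, which is why each of the 27 choices of (i, j, k) contributes exactly one nonzero entry.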

The most common way to deal with arrays in the Wolfram Language has always been in terms of explicit lists of elements. And in this representation it’s extremely convenient that operations are normally done elementwise:

\n
\n
\n

\n

Non-lists are then by default treated as scalars—and for example here added into every element:

\n
\n
\n

\n

But now there’s something new, namely symbolic arrays—which in effect implicitly contain multiple list elements, and thus can’t be “added into every element”:

\n
\n
\n

\n

This is what happens when we have an “ordinary scalar” together with a symbolic vector:

\n
\n
\n

\n

How does this work? “Under the hood” there’s a new attribute NonThreadable which specifies that certain heads (like ArraySymbol) shouldn’t be threaded by Listable functions (like Plus).

\n

By the way, ever since Version 9 a dozen years ago we’ve had a limited mechanism for assuming that symbols represent vectors, matrices or arrays—and now that mechanism interoperates with all our new symbolic array functionality:

\n
\n
\n

\n

When you’re doing explicit computations there’s often no choice but to deal directly with individual array elements. But it turns out that there are all sorts of situations where it’s possible to work instead in terms of “whole” vectors, matrices, etc. And indeed in the literature of fields like machine learning, optimization, statistics and control theory, it’s become quite routine to write down formulas in terms of symbolic vectors, matrices, etc. And what Version 14.1 now adds is a streamlined way to compute in terms of these symbolic array constructs.

\n

The results are often very elegant. So, for example, here’s how one might set up a general linear least-squares problem using our new symbolic array constructs. First we define a symbolic n×m matrix A, and symbolic vectors b and x:


Our goal is to find a vector x that minimizes Norm[A.x − b]. And with our definitions we can now immediately write down this quantity:


To extremize it, we compute its derivative


and to ensure we get a minimum, we compute the second derivative:


These are standard textbook formulas, but the cool thing is that in Version 14.1 we’re now in a position to generate them completely automatically. By the way, if we take another derivative, the result will be a zero tensor:
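The textbook identities here can be checked mechanically in any symbolic system. As a rough cross-check, here's a Python/sympy sketch with small concrete dimensions (my own illustration, not the Wolfram symbolic-array machinery): the gradient of the squared norm |A.x − b|² comes out as 2 Aᵀ(A x − b), and the Hessian as 2 AᵀA:

```python
import sympy as sp

# small concrete dimensions so everything stays explicit
n, m = 3, 2
A = sp.Matrix(n, m, lambda i, j: sp.Symbol(f"a{i}{j}"))
b = sp.Matrix(n, 1, lambda i, _: sp.Symbol(f"b{i}"))
x = sp.Matrix(m, 1, lambda i, _: sp.Symbol(f"x{i}"))

r = A * x - b
f = (r.T * r)[0, 0]                      # the objective |A.x - b|^2

grad = sp.Matrix([f]).jacobian(x).T      # first derivative w.r.t. x
hess = sp.hessian(f, list(x))            # second derivative w.r.t. x

# the textbook least-squares formulas: 2 A^T (A x - b) and 2 A^T A
assert sp.expand(grad - 2 * A.T * r) == sp.zeros(m, 1)
assert sp.expand(hess - 2 * A.T * A) == sp.zeros(m, m)
```

The point of the element-level check is just to confirm the whole-array formulas that the symbolic-array derivative produces directly.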


We can look at other norms too:


Binomials and Pitchforks: Navigating Mathematical Conventions


Binomial coefficients have been around for at least a thousand years, and one might not have thought there could possibly be anything shocking or controversial about them anymore (notwithstanding the fictional Treatise on the Binomial Theorem by Sherlock Holmes’s nemesis Professor Moriarty). But in fact we have recently been mired in an intense debate about binomial coefficients—which has caused us in Version 14.1 to introduce a new function PascalBinomial alongside our existing Binomial.


When one’s dealing with positive integer arguments, there’s no issue with binomials. And even when one extends to generic complex arguments, there’s again a unique way to do this. But negative integer arguments are a special degenerate case. And that’s where there’s trouble—because there are two different definitions that have historically been used.


In early versions of Mathematica, we picked one of these definitions. But over time we realized that it led to some subtle inconsistencies, and so for Version 7—in 2008—we changed to the other definition. Some of our users were happy with the change, but some were definitely not. A notable (vociferous) example was my friend Don Knuth, who has written several well-known books that make use of binomial coefficients—always choosing what amounts to our pre-2008 definition.


So what could we do about this? For a while we thought about adding an option to Binomial, but to do this would have broken our normal conventions for mathematical functions. And somehow we kept on thinking that there was ultimately a “right answer” to how binomial coefficients should be defined. But after a lot of discussion—and historical research—we finally concluded that since at least before 1950 there have just been two possible definitions, each with their own advantages and disadvantages, with no obvious “winner”. And so in Version 14.1 we decided just to introduce a new function PascalBinomial to cover the “other definition”.


And—though at first it might not seem like much—here’s a big difference between Binomial and PascalBinomial:


Part of why things get complicated is the relation to symbolic computation. Binomial has a symbolic simplification rule, valid for any n:


But there isn’t a corresponding generic simplification rule for PascalBinomial:


FunctionExpand shows us the more nuanced result in this case:


To see a bit more of what’s going on, we can compute arrays of nonzero results for Binomial and PascalBinomial:


Binomial[n, k] has the “nice feature” that it’s symmetric in k even when n < 0. But this has the “bad consequence” that Pascal’s identity (that says a particular binomial coefficient is the sum of two coefficients “above it”) isn’t always true. PascalBinomial, on the other hand, always satisfies the identity, and it’s in recognition of this that we put “Pascal” in its name.
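The trade-off can be made concrete with a small Python sketch (my own illustration, not Wolfram code) of the two conventions for integer arguments: a "Pascal" convention that is zero whenever k < 0 and satisfies Pascal's identity everywhere, and a "symmetric" convention that extends values to negative k via the symmetry rule, but then breaks the identity at n = 0, k = 0:

```python
from fractions import Fraction
from math import prod

def falling(n, k):
    # falling factorial n (n-1) ... (n-k+1), with the empty product = 1
    return prod(n - i for i in range(k))

def pascal_binomial(n, k):
    # "Pascal" convention: zero whenever k < 0
    return Fraction(falling(n, k), falling(k, k)) if k >= 0 else Fraction(0)

def symmetric_binomial(n, k):
    # illustrative "symmetric" convention: fall back on C(n, k) = C(n, n-k)
    if k >= 0:
        return Fraction(falling(n, k), falling(k, k))
    if n - k >= 0:
        return symmetric_binomial(n, n - k)
    return Fraction(0)

def pascal_identity_holds(f, n, k):
    return f(n, k) == f(n - 1, k) + f(n - 1, k - 1)

# the Pascal convention satisfies the identity for all integer n, k ...
assert all(pascal_identity_holds(pascal_binomial, n, k)
           for n in range(-6, 7) for k in range(-6, 7))
# ... but the symmetric convention already fails at n = 0, k = 0: 1 vs 1 + 1
assert not pascal_identity_holds(symmetric_binomial, 0, 0)
```

The failure case is exactly the degenerate negative-integer corner discussed above: under the symmetric convention the two coefficients "above" C(0, 0) are both 1, so the identity gives 2 rather than 1.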


And, yes, this is all quite subtle. And, remember, the differences between Binomial and PascalBinomial only show up at negative integer values. Away from such values, they’re both given by the same expression, involving gamma functions. But at negative integer values, they correspond to different limits, respectively:


The story of Binomial and PascalBinomial is a complicated one that mainly affects only the upper reaches of discrete mathematics. But there’s another, much more elementary convention that we’ve also tackled in Version 14.1: the convention of what the arguments of trigonometric functions mean.


We’ve always taken the “fundamentally mathematical” point of view that the x in Sin[x] is in radians:


You’ve always been able to explicitly give the argument in degrees (using Degree—or, after Version 3 in 1996, using °):


But a different convention would just say that the argument to Sin should always be interpreted as being in degrees, even if it’s just a plain number. Calculators would often have a physical switch that globally toggles to this convention. And while that might be OK if you are just doing a small calculation and can physically see the switch, nothing like that would make any sense at all in our system. But still, particularly in elementary mathematics, one might want a “degrees version” of trigonometric functions. And in Version 14.1 we’ve introduced these:
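The idea behind the degree functions is simple enough to state in any language—here's a one-line Python analogue (an illustration of the concept, not the built-in implementation), together with its 360-degree periodicity:

```python
import math

def sin_degrees(x):
    # sine with its argument interpreted in degrees rather than radians
    return math.sin(math.radians(x))

assert abs(sin_degrees(30) - 0.5) < 1e-12                 # sin(30 deg) = 1/2
assert abs(sin_degrees(414) - sin_degrees(54)) < 1e-12    # period is 360
```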


One might think this was somewhat trivial. But what’s nontrivial is that the “degrees trigonometric functions” are consistently integrated throughout the system. Here, for example, is the period of SinDegrees:


You can take the integral as well


and the messiness of this form shows why for more than three decades we’ve just dealt with Sin[x] and radians.


Fixed Points and Stability for Differential and Difference Equations


All sorts of differential equations have the feature that their solutions exhibit fixed points. It’s always in principle been possible to find these by looking for points where derivatives vanish. But in Version 14.1 we now have a general, robust function that takes the same form of input as DSolve and finds all fixed points:


Here’s a stream plot of the solutions to our equations, together with the fixed points we’ve found:


And we can see that there are two different kinds of fixed points here. The ones on the left and right are “stable” in the sense that solutions that start near them always stay near them. But it’s a different story for the fixed points at the top and bottom; for these, solutions that start nearby can diverge. The function DStabilityConditions computes fixed points, and specifies whether they are stable or not:
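The underlying criterion is classical: a fixed point is (linearly) stable when every eigenvalue of the Jacobian there has negative real part. Here's that computation sketched in Python with sympy, for an assumed damped Duffing-type system (not the system in the plot above):

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)
# assumed example system:  x' = y,  y' = -y + x - x^3
fx, fy = y, -y + x - x**3

fixed_points = sp.solve([fx, fy], [x, y], dict=True)
J = sp.Matrix([fx, fy]).jacobian([x, y])

def is_stable(pt):
    # linear stability: all Jacobian eigenvalues have negative real part
    return all(sp.re(ev) < 0 for ev in J.subs(pt).eigenvals())

stable = {(p[x], p[y]) for p in fixed_points if is_stable(p)}
assert stable == {(-1, 0), (1, 0)}    # the origin is an unstable saddle
```

This is the same eigenvalue test that a stability-conditions computation automates, including (as in the parametric example below) keeping the eigenvalue conditions symbolic when parameters are present.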


As another example, here are the Lorenz equations, which have one unstable fixed point, and two stable ones:


If your equations have parameters, the stability of their fixed points can depend on those parameters:


Extracting the conditions here, we can now plot the region of parameter space where this fixed point is stable:


This kind of stability analysis is important in all sorts of fields, including dynamical systems theory, control theory, celestial mechanics and computational ecology.


And just as one can find fixed points and do stability analysis for differential equations, one can also do it for difference equations—and this is important for discrete dynamical systems, digital control systems, and for iterative numerical algorithms. Here’s a classic example in Version 14.1 for the logistic map:
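For difference equations the analogous criterion involves the map's derivative: a fixed point x* of x → f(x) is stable when |f′(x*)| < 1. A Python/sympy sketch for the logistic map (again my own illustration):

```python
import sympy as sp

x, r = sp.symbols("x r", real=True)
f = r * x * (1 - x)                      # the logistic map  x -> r x (1 - x)

fixed_points = sp.solve(sp.Eq(f, x), x)  # 0 and (r - 1)/r
deriv = sp.diff(f, x)

# stability of the nonzero fixed point: |f'(x*)| = |2 - r| < 1, i.e. 1 < r < 3
stable_cond = sp.Abs(deriv.subs(x, (r - 1) / r)) < 1

assert stable_cond.subs(r, sp.Rational(5, 2)) == True    # stable at r = 5/2
assert stable_cond.subs(r, 4) == False                   # unstable at r = 4
```

The resulting condition 1 < r < 3 is the classical stability window for the logistic map's nonzero fixed point, before the period-doubling cascade begins.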


The Steady Advance of PDEs


Five years ago—in Version 11.3—we introduced our framework for symbolically representing physical systems using PDEs. And in every version since we’ve been steadily adding more and more capabilities. At this point we’ve now covered the basics of heat transfer, mass transport, acoustics, solid mechanics, fluid mechanics, electromagnetics and (one-particle) quantum mechanics. And with our underlying symbolic framework, it’s easy to mix components of all these different kinds.


Our goal now is to progressively cover what’s needed for more and more kinds of applications. So in Version 14.1 we’re adding von Mises stress analysis for solid mechanics, electric current density models for electromagnetics and anisotropic effective masses for quantum mechanics.


So as an example of what’s now possible, here’s a piece of geometry representing a spiral inductor of the kind that might be used in a modern MEMS device:


Let’s define our variables—voltage and position:


And let’s specify parameters—here just that the material we’re going to deal with is copper:


Now we’re in a position to set up the PDE for this system, making use of the new constructs ElectricCurrentPDEComponent and ElectricCurrentDensityValue:


All it takes to solve this PDE for the voltage is then:


From the voltage we can compute the current density


and then plot it (and, yes, the current tends to avoid the corners):


Symbolic Biomolecules and Their Visualization


Ever since Version 12.2 we’ve had the ability to represent and manipulate bio sequences of the kind that appear in DNA, RNA and proteins. We’ve also been able to do things like import PDB (Protein Data Bank) files and generate graphics from them. But now in Version 14.1 we’re adding a symbolic BioMolecule construct, to represent the full structure of biomolecules:


Ultimately this is “just a molecule” (and in this case its data is so big it’s not by default stored locally in your notebook):


But what BioMolecule does is also to capture the “higher-order structure” of the molecule, for example how it’s built up from distinct chains, where structures like α-helices occur in these, and so on. For example, here are the two (bio sequence) chains that appear in this case:


And here are where the α-helices occur:


What about visualization? Well, there’s BioMoleculePlot3D for that:


There are different “themes” you can use for this:


Here’s a raw-atom-level view:


You can combine the views—and for example add coordinate values (specified in angstroms):


You can also specify “color rules” that define how particular parts of the biomolecule should be rendered:


But the structure here isn’t just something you can make graphics out of; it’s also something you can compute with. For example, here’s a geometric region formed from the biomolecule:


And this computes its surface area (in square angstroms):


The Wolfram Language has built-in data on a certain number of proteins. But you can get data on many more proteins from external sources—specifying them with external identifiers:


When you get a protein—say from an external source—it’ll often come with a 3D structure specified, for example as deduced from experimental measurements. But even without that, Version 14.1 will attempt to find at least an approximate structure—by using machine-learning-based protein-folding methods. As an example, here’s a random bio sequence:


If you make a BioMolecule out of this, a “predicted” 3D structure will be generated:


Here’s a visualization of this structure—though more work would be needed to determine how it’s related to what one might actually observe experimentally:


Optimizing Neural Nets for GPUs and NPUs


Many computers now come with GPU and NPU hardware accelerators for machine learning, and in Version 14.1 we’ve added more support for these. Specifically, on macOS (Apple Silicon) and Windows machines, built-in functions like ImageIdentify and SpeechRecognize now automatically use CoreML (Neural Engine) and DirectML capabilities—and the result is typically 2X to 10X faster performance.


We’ve always supported explicit CUDA GPU acceleration, for both training and inference. And in Version 14.1 we now also support CoreML and DirectML acceleration for inference tasks with explicitly specified neural nets. But whereas this acceleration is now the default for built-in machine-learning-based functions, for explicitly specified models it isn’t yet the default.


So, for example, this doesn’t use GPU acceleration:


But you can explicitly request it—and then (assuming all features of the model can be accelerated) things will typically run significantly faster:


We’re continually sprucing up our infrastructure for machine learning. And as part of that, in Version 14.1 we’ve enhanced our diagrams for neural nets to make layers more visually distinct—and to immediately produce diagrams suitable for publication:


The Statistics of Dates


We’ve been releasing versions of what’s now the Wolfram Language for 36 years. And looking at that whole collection of release dates, we can ask statistical questions. Like “What’s the median date for all the releases so far?” Well, in Version 14.1 there’s a direct way to answer that—because statistical functions like Median now just immediately work on dates:
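The trick that makes order statistics work on dates is the usual one: measure each date as an offset from a common origin, take the statistic on the offsets, and convert back. A Python sketch with a small illustrative list of dates (not the actual release dates):

```python
from datetime import date, timedelta
from statistics import median

dates = [date(1988, 6, 23), date(1991, 1, 15), date(1996, 9, 1),
         date(2008, 11, 21), date(2024, 7, 31)]   # illustrative only

# reduce dates to day offsets from a common origin, take the median there
origin = min(dates)
offsets = [(d - origin).days for d in dates]
median_date = origin + timedelta(days=median(offsets))

assert median_date == sorted(dates)[2]   # odd count: the middle date
```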


What if we ask about all 7000 or so functions in the Wolfram Language? Here’s a histogram of when they were introduced:


And now we can compute the median, showing quantitatively that, yes, Wolfram Language development has speeded up:


Dates are a bit like numbers, but not quite. For example, their “zero” shifts around depending on the calendar. And their granularity is more complicated than precision for numbers. In addition, a single date can have multiple different representations (say in different calendars or with different granularities). But it nevertheless turns out to be possible to define many kinds of statistics for dates. To understand these statistics—and to compute them—it’s typically convenient to make one’s whole collection of dates have the same form. And in Version 14.1 this can be achieved with the new function ConformDates (which here converts all dates to the format of the first one listed):
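The conforming step is easy to picture in miniature: parse a mixed collection of date strings, then re-render them all in the format of the first. A Python sketch (the list of formats here is an assumption for illustration):

```python
from datetime import datetime

mixed = ["2024-07-31", "31 July 2024", "07/31/2024"]
known_formats = ["%Y-%m-%d", "%d %B %Y", "%m/%d/%Y"]

def parse_any(s):
    # try each known format until one fits
    for fmt in known_formats:
        try:
            return datetime.strptime(s, fmt)
        except ValueError:
            continue
    raise ValueError(f"unrecognized date: {s}")

# conform everything to the format of the first date in the list
conformed = [parse_any(s).strftime(known_formats[0]) for s in mixed]
assert conformed == ["2024-07-31"] * 3
```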


By the way, in Version 14.1 the whole pipeline for handling dates (and times) has been dramatically speeded up, most notably conversion from strings, as needed in the import of dates.


The concept of doing statistics on dates introduces another new idea: date (and time) distributions. And in Version 14.1 there are two new functions DateDistribution and TimeDistribution for defining such distributions. Unlike for numerical (or quantity) distributions, date and time distributions require the specification of an origin, like Today, as well as of a scale, like "Days":


But given this symbolic specification, we can now do operations just like for any other distribution, say generating some random variates:


Building Videos with Programs


Introduced in Version 6 back in 2007, Manipulate provides an immediate way to create an interactive “manipulable” interface. And it’s been possible for a long time to export Manipulate objects to video. But just what should happen in the video? What sliders should move in what way? In Version 12.3 we introduced AnimationVideo to let you make a video in which one parameter is changing with time. But now in Version 14.1 we have ManipulateVideo which lets you create a video in which many parameters can be varied simultaneously. One way to specify what you want is to say for each parameter what value it should get at a sequence of times (by default measured in seconds from the beginning of the video). ManipulateVideo then produces a smooth video by interpolating between these values:


(An alternative is to specify complete “keyframes” by giving operations to be done at particular times.)


ManipulateVideo in a sense provides a “holistic” way to create a video by controlling a Manipulate. And in the last several versions we’ve introduced many functions for creating videos from “existing structures” (for example FrameListVideo assembles a video from a list of frames). But sometimes you want to build up videos one frame at a time. And in Version 14.1 we’ve introduced SowVideo and ReapVideo for doing this. They’re basically the analog of Sow and Reap for video frames. SowVideo will “sow” one or more frames, and all frames you sow will then be collected and assembled into a video by ReapVideo:


One common application of SowVideo/ReapVideo is to assemble a video from frames that are programmatically picked out by some criterion from some other video. So, for example, this “sows” frames that contain a bird, then “reaps” them to assemble a new video.
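The Sow/Reap pattern itself is easy to sketch outside the Wolfram Language. Here's a minimal Python version, with a stand-in per-frame predicate playing the role of the bird detector (both the collector and the predicate are my own illustrations):

```python
# minimal Sow/Reap-style collector: "sown" items are gathered by the
# innermost enclosing "reap", in the order they were sown
_stack = []

def sow(item):
    if _stack:
        _stack[-1].append(item)

def reap(body):
    _stack.append([])
    body()
    return _stack.pop()

frames = list(range(10))              # stand-ins for decoded video frames
def contains_bird(frame):             # hypothetical per-frame classifier
    return frame % 3 == 0

def scan():
    for f in frames:
        if contains_bird(f):
            sow(f)

kept = reap(scan)
assert kept == [0, 3, 6, 9]           # these frames would form the new video
```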


Another way to programmatically create one video from another is to build up a new video by progressively “folding in” frames from an existing video—which is what the new function VideoFrameFold does:


Version 14.1 also has a variety of new “convenience functions” for dealing with videos. One example is VideoSummaryPlot which generates various “at-a-glance” summaries of videos (and their audio):


Another new feature in Version 14.1 is the ability to apply audio processing functions directly to videos:


And, yes, it’s a bird:


Optimizing the Speech Recognition Workflow


We first introduced SpeechRecognize in 2019 in Version 12.0. And now in Version 14.1 SpeechRecognize is getting a makeover.


The most dramatic change is speed. In the past, SpeechRecognize would typically take at least as long to recognize a piece of speech as the duration of the speech itself. But now in Version 14.1, SpeechRecognize runs many tens of times faster, so you can recognize speech much faster than real time.


And what’s more, SpeechRecognize now produces full, written text, complete with capitalization, punctuation, etc. So here, for example, is a transcription of a little video:


There’s also a new function, VideoTranscribe, that will take a video, transcribe its audio, and insert the transcription back into the subtitle track of the video.


And, by the way, SpeechRecognize runs entirely locally on your computer, without having to access a server (except maybe for updates to the neural net it’s using).


In the past SpeechRecognize could only handle English. In Version 14.1 it can handle 100 languages—and can automatically produce translated transcriptions. (By default it produces transcriptions in the language you’ve specified with $Language.) And if you want to identify what language a piece of audio is in, LanguageIdentify now works directly on audio.


SpeechRecognize by default produces a single string of text. But it now also has the option to break up its results into a list, say of sentences:


And in addition to producing a transcription, SpeechRecognize can give time intervals or audio fragments for each element:


Historical Geography Becomes Computable


History is complicated. But that doesn’t mean there isn’t much about it that can be made computable. We’ve had extensive geographic computation capabilities in the Wolfram Language for well over a decade. And in Version 14.1 we’re taking a major step forward by extending those capabilities to historical geography.


So now you can not only ask for a map of where the current country of Italy is, but also for a map of the Roman Empire in 100 AD:


And “the Roman Empire in 100 AD” is now a computable entity. So you can ask for example what its approximate area was:


And you can even make a plot of how the area of the Roman Empire changed over the period from 0 AD to 200 AD:


We’ve been building our knowledgebase of historical geography for many years. Of course, country borders may be disputed, and—particularly in the more distant past—may not have been well defined. But by now we’ve accumulated computable data on basically all of the few thousand known historical countries. Still—with history being complicated—it’s not surprising that there are all sorts of often subtle issues.


Let’s start by asking what historical countries the location that’s now Mexico City has been in. GeoIdentify gives the answer:


And already we see subtlety. For example, our historical country entities are labeled by their overall beginning and ending dates. But most of them covered Mexico City only for part of their existence. And here we can see what’s going on:


Often there’s subtlety in identifying what should count as a “different country”. If there was just an “acquisition” or a small “change of management” maybe it’s still the same country. But if there was a “dramatic reorganization”, maybe it’s a different country. Sometimes the names of countries (if they even had official names) give clues. But in general it’s taken lots of case-by-case curation, trying to follow the typical conventions used by historians of particular times and places.


For London we see several “close-but-we-consider-it-a-different-country” issues—along with various confusing repeated conquerings and reconquerings:


Here’s a timeline plot of the countries that have contained London:


And because everything is computable, it’s easy to identify the longest contiguous segment here:
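Computations like "longest contiguous segment" reduce to simple interval arithmetic once each country's tenure is a (start, end) pair. A Python sketch with illustrative year spans for London:

```python
# illustrative (start_year, end_year) spans for who held London
spans = {
    "Roman Empire": (43, 410),
    "Kingdom of England": (927, 1707),
    "Great Britain": (1707, 1801),
}

# the longest tenure is just the span with the largest end - start
longest = max(spans, key=lambda name: spans[name][1] - spans[name][0])
assert longest == "Kingdom of England"    # 780 years beats the alternatives
```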


GeoIdentify can tell us what entities something like a city is inside. GeoEntities, on the other hand, can tell us what entities are inside something like a country. So, for example, this tells us what historical countries were inside (or at least overlapped with) the current boundaries of the UK in 800 AD:


This then makes a map (the extra list makes these countries be rendered separately):


In the Wolfram Language we have data on quite a few kinds of historical entities beyond countries. For example, we have extensive data on military conflicts. Here we’re asking what military conflicts occurred within the borders of what’s now France between 200 BC and 200 AD:


Here’s a map of their locations:


And here are conflicts in the Atlantic Ocean in the period 1939–1945:


And—combining several things—here’s a map of conflicts that, at the time when they occurred, were within the region of what was then Carthage:


There are all sorts of things that we can compute from historical geography. For example, this asks for the (minimum) geo distance between the territory of the Roman Empire and the Han Dynasty in 100 AD:
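A minimum distance between two territories is, in miniature, a minimum over pairwise great-circle distances between points of the two regions. Here's a coarse Python sketch using the haversine formula over a few hand-picked sample points (the coordinates are rough illustrations, not the data behind the result above):

```python
from itertools import product
from math import asin, cos, radians, sin, sqrt

def haversine_km(p, q):
    # great-circle distance between two (lat, lon) points, in kilometers
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(h))

roman = [(41.9, 12.5), (36.2, 36.2)]   # e.g. Rome, Antioch (illustrative)
han = [(34.3, 108.9), (39.8, 98.3)]    # e.g. Chang'an, Jiuquan (illustrative)

# minimum over all pairs of sample points, one from each territory
d_min = min(haversine_km(p, q) for p, q in product(roman, han))
assert 4000 < d_min < 7000             # a few thousand kilometers apart
```

The real computation works on full polygonal regions rather than sample points, but the "minimum over pairs" structure is the same.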


But what about the overall minimum distance across all years when these historical countries existed? This gives the result for that:


Let’s compare this with a plot of these two entities:


But there’s a subtlety here. What version of the Roman Empire is it that we’re showing on the map here? Our convention is by default to show historical countries “at their zenith”, i.e. at the moment when they had their maximum extent.


But what about other choices? Dated gives us a way to specify a particular date. But another possibility is to include in what we consider to be a particular historical country any territory that was ever part of that country, at any time in its history. And you can do this using GeoVariant[…, "UnionArea"]. In the particular case we’re showing here, it doesn’t make much difference, except that there’s more territory in Germany and Scotland included in the Roman Empire:


By the way, you can combine Dated and GeoVariant, to get things like “the zenith within a certain period” or “any territory that was included at any time within a period”. And, yes, it can get quite complicated. In a rather physics-like way you can think of the extent of a historical country as defining a region in spacetime—and indeed GeoVariant[…, "TimeSeries"] in effect represents a whole “stack of spacelike slices” in this spacetime region:


And—though it takes a little while—you can use it to make a video of the rise and fall of the Roman Empire:


Astronomical Graphics and Their Axes


It’s complicated to define where things are in the sky. There are four main coordinate systems that get used in doing this: horizon (relative to local horizon), equatorial (relative to the Earth’s equator), ecliptic (relative to the orbit of the Earth around the Sun) and galactic (relative to the plane of the galaxy). And when we draw a diagram of the sky (here on white for clarity) it’s typical to show the “axes” for all these coordinate systems:


But here’s a tricky thing: how should those axes be labeled? Each one is different: horizon is most naturally labeled by things like cardinal directions (N, E, S, W, etc.), equatorial by hours in the day (in sidereal time), ecliptic by months in the year, and galactic by angle from the center of the galaxy.


In ordinary plots axes are usually straight, and labeled uniformly (or perhaps, say, logarithmically). But in astronomy things are much more complicated: the axes are intrinsically circular, and then get rendered through whatever projection we’re using.


And we might have thought that such axes would require some kind of custom structure. But not in the Wolfram Language. Because in the Wolfram Language we try to make things general. And axes are no exception:


So in AstroGraphics all our various axes are just AxisObject constructs—that can be computed with. And so, for example, here’s a Mollweide projection of the sky:


If we insist on “seeing the whole sky”, the bottom half is just the Earth (and, yes, the Sun isn’t shown because I’m writing this after it’s set for the day…):


Things get a bit wild if we start adding grid lines, here for galactic coordinates:


And, yes, the galactic coordinate axis is indeed aligned with the plane of the Milky Way (i.e. our galaxy):


When Is Earthrise on Mars? New Level of Astronomical Computation


When will the Earth next rise above the horizon from where the Perseverance rover is on Mars? In Version 14.1 we can now compute this (and, yes, this is an “Earth time” converted from Mars time using the standard barycentric celestial reference system (BCRS) solar-system-wide spacetime coordinate system):


This is a fairly complicated computation that takes into account not only the motion and rotation of the bodies involved, but also various other physical effects. A more “down to Earth” example that one might readily check by looking out of one’s window is to compute the rise and set times of the Moon from a particular point on the Earth:


There’s a slight variation in the times between moonrises:


Over the course of a year we see systematic variations associated with the periods of different kinds of lunar months:


There are all sorts of subtleties here. For example, when exactly does one define something (like the Sun) to have “risen”? Is it when the top of the Sun first peeks out? When the center appears? Or when the “whole Sun” is visible? In Version 14.1 you can ask about any of these:


Oh, and you could compute the same thing for the rise of Venus, but now to see the differences, you’ve got to go to millisecond granularity (and, by the way, granularities of milliseconds down to picoseconds are new in Version 14.1):


By the way, particularly for the Sun, the concept of ReferenceAltitude is useful in specifying the various kinds of sunrise and sunset: for example, “civil twilight” corresponds to a reference altitude of –6°.


Geometry Goes Color, and Polar


Last year we introduced the function ARPublish to provide a streamlined way to take 3D geometry and publish it for viewing in augmented reality. In Version 14.1 we’ve now extended this pipeline to deal with color:


(Yes, the color is a little different on the phone because the phone tries to make it look “more natural”.)


Augmented reality via QR code


And now it’s easy to view this not just on a phone, but also, for example, on the Apple Vision Pro:


Graphics have always had color. But now in Version 14.1 symbolic geometric regions can have color too:


And constructive geometric operations on regions preserve color:


Two other new functions in Version 14.1 are PolarCurve and FilledPolarCurve:


And while at this level this may look simple, what’s going on underneath is actually seriously complicated, with all sorts of symbolic analysis needed in order to determine what the “inside” of the parametric curve should be.


Talking about geometry and color brings up another enhancement in Version 14.1: plot themes for diagrams in synthetic geometry. Back in Version 12.0 we introduced symbolic synthetic geometry—in effect finally providing a streamlined computable way to do the kind of geometry that Euclid did two millennia ago. In the past few versions we’ve been steadily expanding our synthetic geometry capabilities, and now in Version 14.1 one notable thing we’ve added is the ability to use plot themes—and explicit graphics options—to style geometric diagrams. Here’s the default version of a geometric diagram:


Now we can “theme” this for the web:


New Computation Flow in Notebooks: Introducing Cell-Linked %


In building up computations in notebooks, one very often finds oneself wanting to take a result one just got and then do something with it. And ever since Version 1.0 one’s been able to do this by referring to the result one just got as %. It’s very convenient. But there are some subtle and sometimes frustrating issues with it, the most important of which has to do with what happens when one reevaluates an input that contains %.


Let’s say you’ve done this:


Range


But now you decide that actually you wanted Median[ % ^ 2 ] instead. So you edit that input and reevaluate it:


Edit and reevaluate


Oops! Even though what’s right above your input in the notebook is a list, the value of % is the latest result that was computed, which you can’t now see, but which was 3.


OK, so what can one do about this? We’ve thought about it for a long time (and by “long” I mean decades). And finally now in Version 14.1 we have a solution—that I think is very nice and very convenient. The core of it is a new notebook-oriented analog of %, that lets one refer not just to things like “the last result that was computed” but instead to things like “the result computed in a particular cell in the notebook”.


So let’s look at our sequence from above again. Let’s start typing another cell—say to “try to get it right”. In Version 14.1 as soon as we type % we see an autosuggest menu:


Autosuggest menu


The menu is giving us a choice of (output) cells that we might want to refer to. Let’s pick the last one listed:


Last menu option


This object is a reference to the output from the cell that’s currently labeled In[1]—and using it now gives us what we wanted.


But let’s say we go back and change the first (input) cell in the notebook—and reevaluate it:


Reevaluate Range


The cell now gets labeled In[5]—and the cell-linked reference (in In[4]) that refers to that cell will immediately change to match:


Median


And if we now evaluate this cell, it’ll pick up the value of the output associated with In[5], and give us a new answer:


New answer


So what’s really going on here? The key idea is that this object signifies a new type of notebook element that’s a kind of cell-linked analog of %. It represents the latest result from evaluating a particular cell, wherever the cell may be, and whatever the cell may be labeled. (The object always shows the current label of the cell it’s linked to.) In effect a cell-linked % is “notebook front end oriented”, while ordinary % is kernel oriented. A cell-linked % is linked to the contents of a particular cell in a notebook; % refers to the state of the Wolfram Language kernel at a certain time.

\n

A cell-linked % gets updated whenever the cell it’s referring to is reevaluated. So its value can change either because the cell has been explicitly edited (as in the example above) or because reevaluation gives a different value, say because it involves generating a random number:

\n

RandomInteger

\n

OK, so a cell-linked % always refers to “a particular cell”. But what makes a cell a particular cell? It’s defined by a unique ID that’s assigned to every cell. When a new cell is created it’s given a universally unique ID, and it carries that same ID wherever it’s placed and whatever its contents may be (and even across different sessions). If the cell is copied, the copy gets a new ID. And although you won’t explicitly see cell IDs, a cell-linked % works by linking to a cell with a particular ID.
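As a rough Python analogue (nothing like Wolfram’s actual implementation, just a sketch of the semantics), one can model ordinary % as kernel-wide history, and a cell-linked reference as a lookup keyed by a cell’s unique ID:

```python
import uuid

class Kernel:
    """Sketch: '%' is kernel state; a cell reference is keyed to one cell."""
    def __init__(self):
        self.history = []   # Out[1], Out[2], ... in evaluation order
        self.by_cell = {}   # cell ID -> latest result from that cell

    def evaluate(self, cell_id, fn):
        result = fn()
        self.history.append(result)
        self.by_cell[cell_id] = result   # reevaluation overwrites the old value
        return result

    def percent(self):
        """Ordinary %: whatever was computed last, anywhere."""
        return self.history[-1]

    def cell_ref(self, cell_id):
        """Cell-linked %: the latest result of one particular cell."""
        return self.by_cell[cell_id]

k = Kernel()
cell_a = uuid.uuid4()                        # every cell gets a universally unique ID
k.evaluate(cell_a, lambda: list(range(1, 11)))
k.evaluate(uuid.uuid4(), lambda: 3)          # some other computation happens later
k.percent()        # -> 3 (the latest result, wherever it came from)
k.cell_ref(cell_a) # -> [1, ..., 10] (pinned to cell_a, however much was computed since)
```

Reevaluating the cell (calling `evaluate` again with the same ID) changes what the cell reference resolves to, mirroring the behavior described above.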

\n

One can think of a cell-linked % as providing a “more stable” way to refer to outputs in a notebook. And actually, that’s true not just within a single session, but also across sessions. Say you save the notebook above and open it in a new session. Here’s what you’ll see:

\n

Saving across sessions

\n

The reference is now grayed out. So what happens if we try to reevaluate it? Well, we get this:

\n

Reconstruct or reevaluate

\n

If we press Reconstruct from output cell the system will take the contents of the first output cell that was saved in the notebook, and use this to get input for the cell we’re evaluating:

\n

Reconstruct from output cell

\n

In almost all cases the contents of the output cell will be sufficient to allow the expression “behind it” to be reconstructed. But in some cases—like when the original output was too big, and so was elided—there won’t be enough in the output cell to do the reconstruction. And in such cases it’s time to take the Go to input cell branch, which in this case will just take us back to the first cell in the notebook, and let us reevaluate it to recompute the output expression it gives.

\n

By the way, whenever you see a “positional %” you can hover over it to highlight the cell it’s referring to:

\n

Positional % highlighting

\n

Having talked a bit about “cell-linked %” it’s worth pointing out that there are still cases when you’ll want to use “ordinary %”. A typical example is if you have an input line that you’re using a bit like a function (say for post-processing) and that you want to repeatedly reevaluate to see what it produces when applied to your latest output.

\n

In a sense, ordinary % is the “most volatile” in what it refers to. Cell-linked % is “less volatile”. But sometimes you want no volatility at all in what you’re referring to; you basically just want to burn a particular expression into your notebook. And in fact the % autosuggest menu gives you a way to do just that.

\n

Notice the button that appears in whatever row of the menu you’re selecting:

\n

Iconize option

\n

Press this and you’ll insert (in iconized form) the whole expression that’s being referred to:

\n

Iconized expression

\n

Now—for better or worse—whatever changes you make in the notebook won’t affect the expression, because it’s right there, in literal form, “inside” the icon. And yes, you can explicitly “uniconize” to get back the original expression:

\n

Uniconize

\n

Once you have a cell-linked % it always has a contextual menu with various actions:

\n

Contextual menu

\n

One of those actions is to do what we just mentioned, and replace the positional % by an iconized version of the expression it’s currently referring to. You can also highlight the output and input cells that the reference is “linked to”. (Incidentally, another way to replace a reference by the expression it’s referring to is simply to “evaluate in place”, which you can do by selecting it and pressing Cmd+Return or Shift+Ctrl+Enter.)

\n

Another item in the menu is Replace With Rolled-Up Inputs. What this does is—as it says—“roll up” a sequence of % references and create a single expression from them:

\n

Replace with rolled-up inputs

\n

What we’ve talked about so far one can think of as being “normal and customary” uses of cell-linked %. But there are all sorts of corner cases that can show up. For example, what happens if you have a reference to a cell you delete? Well, within a single (kernel) session that’s OK, because the expression “behind” the cell is still available in the kernel (unless you reset your $HistoryLength etc.). Still, the reference will show up with a “red broken link” to indicate that “there could be trouble”:

\n

Red broken link

\n

And indeed if you go to a different (kernel) session there will be trouble—because the information you need to get the expression the reference points to is simply no longer available, so it has no choice but to show up in a kind of everything-has-fallen-apart “surrender state”:

\n

Surrender state

\n

A cell-linked % is primarily useful when it refers to cells in the notebook you’re currently using (and indeed the autosuggest menu will contain only cells from your current notebook). But what if it ends up referring to a cell in a different notebook, say because you copied the cell from one notebook to another? It’s a precarious situation. But if all relevant notebooks are open, the reference can still work, though it’s displayed in purple with an action-at-a-distance “wi-fi icon” to indicate its precariousness:

\n

Wi-fi icon

\n

And if, for example, you start a new session, and the notebook containing the “source” of the reference isn’t open, then you’ll get the “surrender state”. (If you open the necessary notebook it’ll “unsurrender” again.)

\n

Yes, there are lots of tricky cases to cover (in fact, many more than we’ve explicitly discussed here). And indeed seeing all these cases makes us not feel bad about how long it’s taken to conceptualize and implement cell-linked %.

\n

The most common way to access a cell-linked % is to use the % autosuggest menu. But if you know you want one, you can always get it by “pure typing”, using for example ESC%ESC. (And, yes, ESC%%ESC or ESC%5ESC etc. also work, so long as the necessary cells are present in your notebook.)

\n

The UX Journey Continues: New Typing Affordances, and More

\n

We invented Wolfram Notebooks more than 36 years ago, and we’ve been improving and polishing them ever since. And in Version 14.1 we’re implementing several new ideas, particularly around making it even easier to type Wolfram Language code.

\n

It’s worth saying at the outset that good UX ideas quickly become essentially invisible. They just give you hints about how to interpret something or what to do with it. And if they’re doing their job well, you’ll barely notice them, and everything will just seem “obvious”.

\n

So what’s new in UX for Version 14.1? First, there’s a story around brackets. We first introduced syntax coloring for unmatched brackets back in the late 1990s, and gradually polished it over the following two decades. Then in 2021 we started “automatching” brackets (and other delimiters), so that as soon as you type “f[” you immediately get f[ ].

\n

But how do you keep on typing? You could use an arrow key to “move through” the ]. But we’ve set it up so you can just “type through” ] by typing ]. In one of those typical pieces of UX subtlety, however, “type through” doesn’t always make sense. For example, let’s say you typed f[x]. Now you click right after [ and you type g[, so you’ve got f[g[x]. You might think there should be an autotyped ] to go along with the [ after g. But where should it go? Maybe you want to get f[g[x]], or maybe you’re really trying to type f[g[],x]. We definitely don’t want to autotype ] in the wrong place. So the best we can do is not autotype anything at all, and just let you type the ] yourself, where you want it. But remember that with f[x] on its own, the ] is autotyped, and so if you type ] yourself in this case, it’ll just type through the autotyped ] and you won’t explicitly see it.

\n

So how can you tell whether a ] you type will explicitly show up, or will just be “absorbed” as type-through? In Version 14.1 there’s now different syntax coloring for these cases: yellow if it’ll be “absorbed”, and pink if it’ll explicitly show up.

\n

This is an example of type-through, so Range is colored yellow and the ] you type is “absorbed”:

\n

Range highlighted yellow

\n

And this is an example of non-type-through, so Round is colored pink and the ] you type is explicitly inserted:

\n

Round highlighted pink

\n

This may all sound very fiddly and detailed—and for us in developing it, it is. But the point is that you don’t explicitly have to think about it. You quickly learn to just “take the hint” from the syntax coloring about when your closing delimiters will be “absorbed” and when they won’t. And the result is that you’ll have an even smoother and faster typing experience, with even less chance of unmatched (or incorrectly matched) delimiters.
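The absorb-vs-insert decision can be sketched minimally, assuming a hypothetical editor that tracks its auto-typed closers on a stack (the real front end is certainly subtler than this):

```python
def type_closer(buffer, auto_stack):
    """Decide what typing ']' does, given a stack of auto-typed closers.

    buffer     -- characters to the left of the cursor
    auto_stack -- closers the editor auto-typed, sitting just right of the cursor
    Returns (new_buffer, absorbed): absorbed=True means the keystroke merely
    moved through an auto-typed ']' rather than inserting a new character.
    """
    if auto_stack and auto_stack[-1] == "]":
        auto_stack.pop()            # "type through": the ] was already there
        return buffer + "]", True   # cursor moves past it; nothing new inserted
    return buffer + "]", False      # no pending auto-closer: really insert a ]

# f[  -> the editor auto-typed a ], so the ] we type is absorbed ("yellow" case)
buf1, absorbed = type_closer("f[x", ["]"])
# f[g[x  -> the ambiguous case where nothing was auto-typed, so ] is inserted ("pink" case)
buf2, absorbed2 = type_closer("f[g[x", [])
```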

\n

The new syntax coloring we just discussed helps in typing code. In Version 14.1 there’s also something new that helps in reading code. It’s an enhanced version of something that’s actually common in IDEs: when you click (or select) a variable, every instance of that variable immediately gets highlighted:

\n

Highlighted variable

\n

What’s subtle in our case is that we take account of the scoping of localized variables—putting a more colorful highlight on instances of a variable that are in scope:

\n

Multiple instances of a variable

\n

One place this tends to be particularly useful is in understanding nested pure functions that use #. By clicking a # you can see which other instances of # are in the same pure function, and which are in different ones (the highlight is bluer inside the same function, and grayer outside):

\n

Highlighting in nested functions

\n

On the subject of finding variables, another change in Version 14.1 is that fuzzy name autocompletion now also works for contexts. So if you have a symbol whose full name is context1`subx`var2 you can type c1x and you’ll get a completion for the context; then accept this and you get a completion for the symbol.

\n
\n
\n

\n
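Fuzzy matching of this kind can be sketched as subsequence matching: the query’s characters must appear, in order, somewhere in the candidate name. This is a hypothetical illustration, not the actual completion algorithm:

```python
def is_subsequence(pattern, text):
    """True if pattern's characters appear in text, in order (fuzzy match)."""
    it = iter(text)
    return all(ch in it for ch in pattern)   # 'ch in it' consumes the iterator

def fuzzy_context_matches(query, contexts):
    """Sketch: offer the context names that fuzzily match the query."""
    return [c for c in contexts if is_subsequence(query.lower(), c.lower())]

contexts = ["context1`subx`", "context1`suby`", "other`ctx`"]
fuzzy_context_matches("c1x", contexts)   # only context1`subx` survives
```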

There are also several other notable UX “tune-ups” in Version 14.1. For many years, there’s been an “information box” that comes up whenever you hover over a symbol. Now that’s been extended to entities—so (alongside their explicit form) you can immediately get to information about them and their properties:

\n

Entity information box

\n

Next there’s something that, yes, I personally have found frustrating in the past. Say you’ve got a file, or an image, or something else somewhere on your computer’s desktop. Normally if you want it in a Wolfram Notebook you can just drag it there, and it will very beautifully appear. But what if the thing you’re dragging is very big, or has some other kind of issue? In the past, the drag just failed. Now what happens is that you get the explicit Import that the dragging would have done, so that you can run it yourself (getting progress information, etc.), or modify it, say by adding relevant options.

\n

Another small piece of polish that’s been added in Version 14.1 has to do with Preferences. There are a lot of things you can set in the notebook front end. And they’re explained, at least briefly, in the many Preferences panels. But in Version 14.1 there are now ⓘ buttons that give direct links to the relevant workflow documentation:

\n

Direct link to workflow documentation

\n

Syntax for Natural Language Input

\n

Ever since shortly after Wolfram|Alpha was released in 2009, there’ve been ways to access its natural language understanding capabilities in the Wolfram Language. Foremost among these has been CTRL=—which lets you type free-form natural language and immediately get a Wolfram Language version, often in terms of entities, etc.:

\n

Wolfram|Alpha entities

\n

Generally this is a very convenient and elegant capability. But sometimes one may want to just use plain text to specify natural language input, for example so that one doesn’t interrupt one’s textual typing of input.

\n

In Version 14.1 there’s a new mechanism for this: syntax for directly entering free-form natural language input. The syntax is a kind of “textified” version of CTRL=: =[…]. When you type =[…] as input nothing immediately happens. It’s only when you evaluate your input that the natural language gets interpreted—and then whatever it specifies is computed.

\n

Here’s a very simple example, where each =[…] just turns into an entity:

\n
\n
\n

\n

But when the result of interpreting the natural language is an expression that can be further evaluated, what will come out is the result of that evaluation:

\n
\n
\n

\n

One feature of using =[…] instead of CTRL= is that =[…] is something anyone can immediately see how to type:

\n
\n
\n

\n

But what actually is =[…]? Well, it’s just input syntax for the new function FreeformEvaluate:

\n
\n
\n

\n

You can use FreeformEvaluate inside a program—here, rather whimsically, to see what interpretations are chosen by default for “a” followed by each letter of the alphabet:

\n
\n
\n

\n

By default, FreeformEvaluate interprets your input, then evaluates it. But you can also specify that you want to hold the result of the interpretation:

\n
\n
\n

\n

Diff[ ] … for Notebooks and More!

\n

It’s been a very long-requested capability: give me a way to tell what changed, particularly in a notebook. It’s fairly easy to do “diffs” for plain text. But for notebooks—as structured symbolic documents—it’s a much more complicated story. But in Version 14.1 it’s here! We’ve got a function Diff for doing diffs of notebooks, and actually of many other kinds of things too.

\n

Here’s an example, where we’re requesting a “side-by-side view” of the diff between two notebooks:

\n
\n
\n

\n

And here’s an “alignment chart view” of the diff:

\n
\n
Click to enlarge
\n

\n

Like everything else in the Wolfram Language, a “diff” is a symbolic expression. Here’s an example:

\n
\n
\n

\n

There are lots of different ways to display a diff object; many of them one can select interactively with the menu:

\n

Diff object viewing options

\n

But the most important thing about diff objects is that they can be used programmatically. And in particular DiffApply applies the diffs from a diff object to an existing object, say a notebook.

\n

What’s the point of this? Well, let’s imagine you’ve made a notebook, and given a copy of it to someone else. Then both you and the person to whom you’ve given the copy make changes. You can create a diff object of the diffs between the original version of the notebook, and the version with your changes. And if the changes the other person made don’t overlap with yours, you can just take your diffs and use DiffApply to apply your diffs to their version, thereby getting a “merged notebook” with both sets of changes made.

\n

But what if your changes might conflict? Well, then you need to use the function Diff3. Diff3 takes your original notebook and two modified versions, and does a “three-way diff” to give you a diff object in which any conflicts are explicitly identified. (And, yes, three-way diffs are familiar from source control systems in which they provide the back end for making the merging of files as automated as possible.)
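The non-conflicting merge workflow just described has a rough analogue in Python’s difflib, sketched here for lists. This is only an analogy: Wolfram’s diff objects are richer symbolic structures, and this sketch handles only the case where the other person’s changes don’t overlap yours:

```python
import difflib

def diff(old, new):
    """A diff as a list of (tag, old_slice, new_items) edits, a crude stand-in
    for a symbolic diff object."""
    sm = difflib.SequenceMatcher(a=old, b=new, autojunk=False)
    return [(tag, (i1, i2), new[j1:j2])
            for tag, i1, i2, j1, j2 in sm.get_opcodes() if tag != "equal"]

def diff_apply(edits, target):
    """Replay the edits on target; workable when target's own changes don't
    overlap the edited regions (the non-conflicting case)."""
    out, pos = [], 0
    for _tag, (i1, i2), new_items in edits:
        out.extend(target[pos:i1])   # copy the unchanged stretch
        out.extend(new_items)        # splice in the replacement
        pos = i2
    out.extend(target[pos:])
    return out

base   = ["a", "b", "c", "d"]
mine   = ["a", "B", "c", "d"]        # I edited position 1
theirs = ["a", "b", "c", "d", "e"]   # they appended at the end
diff_apply(diff(base, mine), theirs) # -> ['a', 'B', 'c', 'd', 'e']
```

A real three-way merge (the Diff3 case) would additionally compare both edit sets against the base and flag any overlapping regions as conflicts instead of applying them blindly.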

\n

Notebooks are an important use case for Diff and related functions. But they’re not the only one. Diff can perfectly well be applied, for example, just to lists:

\n
\n
\n

\n

There are many ways to display this diff object; here’s a side-by-side view:

\n

Side-by-side diff view

\n

And here’s a “unified view” reminiscent of how one might display diffs for lines of text in a file:

\n

Unified diff view

\n

And, speaking of files, Diff, etc. can immediately be applied to files:

\n
\n
\n

\n

Diff, etc. can also be applied to cells, where they can analyze changes in both content and styles or metadata. Here we’re creating two cells and then diffing them—showing the result in a side-by-side view:

\n
\n
\n

\n

In “Combined” view the “pure insertions” are highlighted in green, the “pure deletions” in red, and the “edits” are shown as deletion/insertion stacks:

\n

Combined diff view highlighting

\n

Many uses of diff technology revolve around content development—editing, software engineering, etc. But in the Wolfram Language, Diff etc. are also set up to be convenient for information visualization and for various kinds of algorithmic operations. For example, to see what letters differ between the Spanish and Polish alphabets, we can just use Diff:

\n
\n
\n

\n

Here’s the “pure visualization”:

\n
\n
\n

\n

And here’s an alternate “unified summary” form:

\n
\n
\n

\n

Another use case for Diff is bioinformatics. We retrieve two genome sequences—as strings—then use Diff:

\n
\n
\n

\n

We can take the resulting diff object and show it in a different form—here character alignment:

\n
\n
\n

\n

Under the hood, by the way, Diff is finding the differences using SequenceAlignment. But while Diff is giving a “high-level symbolic diff object”, SequenceAlignment is giving a direct low-level representation of the sequence alignment:

\n
\n
\n

\n
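The division of labor between a low-level alignment and a higher-level diff can be illustrated, purely as an analogy, with Python’s difflib.SequenceMatcher on two short DNA-like strings:

```python
import difflib

s1, s2 = "ACGT", "AGT"
sm = difflib.SequenceMatcher(a=s1, b=s2, autojunk=False)

# Low-level view: the matching blocks, analogous to a raw sequence alignment
blocks = [s1[i:i + n] for i, j, n in sm.get_matching_blocks() if n]   # ['A', 'GT']

# Higher-level view: opcodes describing what changed, analogous to a diff object
ops = [(tag, s1[i1:i2], s2[j1:j2]) for tag, i1, i2, j1, j2 in sm.get_opcodes()]
# -> [('equal', 'A', 'A'), ('delete', 'C', ''), ('equal', 'GT', 'GT')]
```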

Information visualization isn’t restricted to two-way diffs; here’s an example with a three-way diff:

\n
\n
\n

\n

And here it is as a “unified summary”:

\n
\n
\n

\n

There are all sorts of options for diffs. One that is sometimes important is DiffGranularity. By default the granularity for diffs of strings is "Characters":

\n
\n
\n

\n

But it’s also possible to set it to be "Words":

\n
\n
\n

\n
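The effect of granularity can be sketched with difflib: diff the raw characters, or diff the list of words. (This is an analogy to the DiffGranularity idea, not Wolfram’s implementation.)

```python
import difflib

def diff_ops(a, b):
    """The non-equal opcodes between two sequences."""
    sm = difflib.SequenceMatcher(a=a, b=b, autojunk=False)
    return [(tag, a[i1:i2], b[j1:j2])
            for tag, i1, i2, j1, j2 in sm.get_opcodes() if tag != "equal"]

old, new = "the cat sat", "the cap sat"
char_diff = diff_ops(old, new)                  # replaces just 't' with 'p'
word_diff = diff_ops(old.split(), new.split())  # replaces the whole word 'cat'
```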

Coming back to notebooks, the most “interactive” form of diff is a “report”:

\n
\n
\n

\n

In such a report, you can open cells to see the details of a specific change, and you can also click to jump to where the change occurred in the underlying notebooks.

\n

When it comes to analyzing notebooks, there’s another new feature in Version 14.1: NotebookCellData. NotebookCellData gives you direct programmatic access to lots of properties of notebooks. By default it generates a dataset of some of them, here for the notebook in which I’m currently authoring this:

\n
\n
\n

\n

There are properties like each cell’s word count, style, memory footprint, and a thumbnail image.

\n

Ever since Version 6 in 2007 we’ve had the CellChangeTimes option which records when cells in notebooks are created or modified. And now in Version 14.1 NotebookCellData provides direct programmatic access to this data. So, for example, here’s a date histogram of when the cells in the current notebook were last changed:

\n
\n
\n

\n

Lots of Little Language Tune-Ups

\n

It’s part of a journey of almost four decades. Steadily discovering—and inventing—new “lumps of computational work” that make sense to implement as functions or features in the Wolfram Language. The Wolfram Language is of course more than strong enough that one can build essentially any functionality from the primitives that already exist in it. But part of the point of the language is to define the best “elements of computational thought”. And particularly as the language progresses, there’s a continual stream of new opportunities for convenient elements to be exposed. And in Version 14.1 we’ve implemented quite a diverse collection of them.

\n

Let’s say you want to nestedly compose a function. Ever since Version 1.0 there’s been Nest for that:

\n
\n
\n

\n

But what if you want the abstract nested function, not yet applied to anything? Well, in Version 14.1 there’s now an operator form of Nest (and NestList) that represents an abstract nested function that can, for example, be composed with other functions, as in

\n
\n
\n

\n

or equivalently:

\n
\n
\n

\n
\n
\n

\n
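In Python terms, the operator form is simply a function that returns the nested function, which then composes like any other function. A sketch of the idea (not the Wolfram API):

```python
def nest(f, n):
    """Operator form of nesting: returns x -> f(f(...f(x)...)), applied n times."""
    def nested(x):
        for _ in range(n):
            x = f(x)
        return x
    return nested

add_one_thrice = nest(lambda x: x + 1, 3)
add_one_thrice(10)   # -> 13

# Because nest returns a function, it composes with other functions:
composed = lambda x: nest(lambda y: y * 2, 3)(x + 1)  # add 1, then double 3 times
composed(1)          # -> 16
```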

A decade ago we introduced functions like AllTrue and AnyTrue that effectively do a whole collection of separate tests “in one gulp”. If one wants to test whether there are any primes in a list, one can always do:

\n
\n
\n

\n

But it’s better to “package” this “lump of computational work” into the single function AnyTrue:

\n
\n
\n

\n

In Version 14.1 we’re extending this idea by introducing AllMatch, AnyMatch and NoneMatch:

\n
\n
\n

\n
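The “one gulp” idea maps directly onto Python’s built-in any and all. Here is a sketch with hypothetical snake_case names mirroring AnyTrue, AllMatch and NoneMatch:

```python
def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def any_true(xs, pred):
    return any(pred(x) for x in xs)

def all_match(xs, pred):
    return all(pred(x) for x in xs)

def none_match(xs, pred):
    return not any(pred(x) for x in xs)

nums = [4, 6, 7, 9]
any_true(nums, is_prime)            # True: 7 is prime
all_match(nums, lambda x: x > 3)    # True: every element exceeds 3
none_match(nums, lambda x: x < 0)   # True: there are no negatives
```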

Another somewhat related new function is AllSameBy. SameQ tests whether a collection of expressions are immediately the same. AllSameBy tests whether expressions are the same by the criterion that the value of some function applied to them is the same:

\n
\n
\n

\n
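A sketch of the AllSameBy idea in Python (the helper name is hypothetical): map the function over the elements and check that every value agrees with the first.

```python
def all_same_by(xs, f):
    """True when f gives the same value on every element of xs."""
    vals = [f(x) for x in xs]
    return all(v == vals[0] for v in vals[1:]) if vals else True

all_same_by(["a", "b", "c"], len)     # True: every string has length 1
all_same_by([1.0, 1.5, 2.0], round)   # False: the rounded values differ
```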

Talking of tests, another new feature in Version 14.1 is a second argument to QuantityQ (and KnownUnitQ), which lets you test not only whether something is a quantity, but also whether it’s a specific type of physical quantity:

\n
\n
\n

\n

And now talking about “rounding things out”, Version 14.1 does that in a very literal way by enhancing the RoundingRadius option. For a start, you can now specify a different rounding radius for particular corners:

\n
\n
\n

\n

And, yes, that’s useful if you’re trying to fit button-like constructs together:

\n
\n
\n

\n

By the way, RoundingRadius now also works for rectangles inside Graphics:

\n
\n
\n

\n

Let’s say you have a string, like “hello”. There are many functions that operate directly on strings. But sometimes you really just want to use a function that operates on lists—and apply it to the characters in a string. Now in Version 14.1 you can do this using StringApply:

\n
\n
\n

\n
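A sketch of the idea in Python (the string_apply name and the rejoining behavior are assumptions for illustration): split the string into characters, apply the list function, and join the result back into a string when possible.

```python
def string_apply(f, s):
    """Apply a list function to a string's characters; rejoin if it returns a list."""
    result = f(list(s))
    return "".join(result) if isinstance(result, list) else result

string_apply(sorted, "hello")               # -> 'ehllo'
string_apply(lambda cs: cs[::-1], "hello")  # -> 'olleh'
string_apply(len, "hello")                  # -> 5 (non-list results pass through)
```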

Another little convenience in Version 14.1 is the function BitFlip, which, yes, flips a bit in the binary representation of a number:

\n
\n
\n

\n
\n
\n

\n
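Flipping a bit is a one-line XOR. Here is a Python sketch (bit positions counted from zero at the least-significant end here, which may or may not match the actual function’s indexing convention):

```python
def bit_flip(n, k):
    """Flip bit k of the nonnegative integer n: XOR with a one-bit mask."""
    return n ^ (1 << k)

bit_flip(0b1011, 2)   # -> 0b1111 == 15
bit_flip(15, 0)       # -> 14
```

Flipping the same bit twice is the identity, since XOR is its own inverse.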

When it comes to Boolean functions, a detail that’s been improved in Version 14.1 is the conversion to NAND representation. By default, functions like BooleanConvert have allowed Nand[p] (which is equivalent to Not[p]). But in Version 14.1 there’s now "BinaryNAND", which yields for example Nand[p, p] instead of just Nand[p] (i.e. Not[p]). So here’s a representation of Or in terms of Nand:

\n
\n
\n

\n
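The binary-NAND identities are easy to check directly: Not[p] becomes Nand[p, p], and Or[p, q] becomes Nand[Nand[p, p], Nand[q, q]]. A quick Python verification against the truth table:

```python
def nand(p, q):
    return not (p and q)

def not_(p):
    """BinaryNAND-style negation: Not[p] as Nand[p, p]."""
    return nand(p, p)

def or_(p, q):
    """Or[p, q] as Nand[Nand[p, p], Nand[q, q]]: NAND of the two negations."""
    return nand(nand(p, p), nand(q, q))

# Check every row of the truth table against the built-in 'or'
[(p, q, or_(p, q) == (p or q)) for p in (False, True) for q in (False, True)]
```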

Making the Wolfram Compiler Easier to Use

\n

Let’s say you have a piece of Wolfram Language code that you know you’re going to run a zillion times—so you want it to run absolutely as fast as possible. Well, you’ll want to make sure you’re doing the best algorithmic things you can (and making the best possible use of Wolfram Language superfunctions, etc.). And perhaps you’ll find it helpful to use things like DataStructure constructs. But ultimately if you really want your code to run absolutely as fast as your computer can make it, you’ll probably want to set it up so that it can be compiled using the Wolfram Compiler, directly to LLVM code and then machine code.

\n

We’ve been developing the Wolfram Compiler for many years, and it’s becoming steadily more capable (and efficient). And for example it’s become increasingly important in our own internal development efforts. In the past, when we wrote critical inner-loop internal code for the Wolfram Language, we did it in C. But in the past few years we’ve almost completely transitioned instead to writing pure Wolfram Language code that we then compile with the Wolfram Compiler. And the result of this has been a dramatically faster and more reliable development pipeline for writing inner-loop code.

\n

Ultimately what the Wolfram Compiler needs to do is to take the code you write and align it with the low-level capabilities of your computer, figuring out what low-level data types can be used for what, etc. Some of this can be done automatically (using all sorts of fancy symbolic and theorem-proving-like techniques). But some needs to be based on collaboration between the programmer and the compiler. And in Version 14.1 we’re adding several important ways to enhance that collaboration.

\n

The first thing is that it’s now easy to get access to information the compiler has. For example, here’s the type declaration the compiler has for the built-in function Dimensions:

\n
\n
\n

\n

And here’s the source code of the actual implementation the compiler is using for Dimensions, calling its intrinsic low-level internal functions like CopyTo:

\n

Compiler source code

\n

A function like Map has a vastly more complex set of type declarations:

\n
\n
\n

\n

For types themselves, CompilerInformation lets you see their type hierarchy:

\n
\n
\n

\n

And for data structure types, you can do things like see the fields they contain, and the operations they support:

\n
\n
\n

\n

And, by the way, something new in Version 14.1 is the function OperationDeclaration which lets you declare operations to add to a data structure type you’ve defined.

\n

Once you actually start running the compiler, a convenient new feature in Version 14.1 is a detailed progress monitor that lets you see what the compiler is doing at each step:

\n
\n
\n

\n

As we said, the key to compilation is figuring out how to align your code with the low-level capabilities of your computer. The Wolfram Language can do arbitrary symbolic operations. But many of those don’t align with low-level capabilities of your computer, and can’t meaningfully be compiled. Sometimes those failures to align are the result of sophistication that’s possible only with symbolic operations. But sometimes the failures can be avoided if you “unpack” things a bit. And sometimes the failures are just the result of programming mistakes. And now in Version 14.1 the Wolfram Compiler is starting to be able to annotate your code to show where the misalignments are happening, so you can go through and figure out what to do with them. (It’s something that’s uniquely possible because of the symbolic structure of the Wolfram Language and even more so of Wolfram Notebooks.)

\n

Here’s a very simple example:

\n

Misalignment error message

\n

In compiled code, Sin expects a numerical argument, so a Boolean argument won’t work. Clicking the Source button lets you see where specifically something went wrong:

\n

Error source

\n

If you have several levels of definitions, the Source button will show you the whole chain:

\n
\n
\n

\n

Here’s a slightly more complicated piece of code, in which the specific place where there’s a problem is highlighted:

\n
\n
\n

\n

In a typical workflow you might start from pure Wolfram Language code, without Typed and other compilation information. Then you start adding such information, repeatedly trying the compilation, seeing what issues arise, and fixing them. And, by the way, because it’s completely efficient to call small pieces of compiled code within ordinary Wolfram Language code, it’s common to start by annotating and compiling the “innermost inner loops” in your code, and gradually “working outwards”.

\n

But, OK, let’s say you’ve successfully compiled a piece of code. Most of the time it’ll handle certain cases, but not others (for example, it might work fine with machine-precision numbers, but not be capable of handling arbitrary precision). By default, compiled code that’s running is set up to generate a message and revert to ordinary Wolfram Language evaluation if it can’t handle something:

\n
\n
\n

\n

But in Version 14.1 there’s a new option, CompilerRuntimeErrorAction, that lets you specify an action to take (or, in general, a function to apply) whenever a runtime error occurs. A setting of None aborts the whole computation if there’s a runtime error:

\n
\n
\n

\n
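The fallback behavior can be sketched generically in Python (all names here are hypothetical): try the fast compiled-style path, and on a case it can’t handle either revert to general evaluation or, None-style, let the failure propagate and abort.

```python
def make_runtime(fast, general, error_action="fallback"):
    """Sketch of revert-to-general-evaluation on a runtime error."""
    def run(x):
        try:
            return fast(x)
        except TypeError:
            if error_action == "fallback":
                return general(x)   # default: fall back to ordinary evaluation
            raise                   # None-style action: propagate and abort
    return run

def fast_square(x):
    """Stand-in for compiled code that only handles machine reals."""
    if not isinstance(x, float):
        raise TypeError("compiled code handles only machine reals")
    return x * x

def general_square(x):
    return x * x                    # the general evaluator handles anything

run = make_runtime(fast_square, general_square)
run(2.0)   # -> 4.0, via the fast path
run(2)     # -> 4, after falling back to the general path

strict = make_runtime(fast_square, general_square, error_action=None)
# strict(2) would raise a TypeError instead of falling back
```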

Even Smoother Integration with External Languages

\n

Let’s say there’s some functionality you want to use, but the only implementation you have is in a package in some external language, like Python. Well, it’s now basically seamless to work with such functionality directly in the Wolfram Language—plugging into the whole symbolic framework and functionality of the Wolfram Language.

\n

As a simple example, here’s a function that uses the Python package faker to produce a random sentence (which of course would also be straightforward to do directly in Wolfram Language):

\n
\n
\n

\n

The first time you run RandomSentence, the progress monitor will show you all sorts of messy things happening under the hood, as Python versions get loaded, dependencies get set up, and so on. But the point is that it’s all automatic, and so you don’t have to worry about it. And in the end, out pops the answer:

\n
\n
\n

\n

And if you run the function again, all the setup will already have been done, and the answer will pop out immediately:

\n
\n
\n

\n

An important piece of automation here is the conversion of data types. One of the great things about the Wolfram Language is that it has fully integrated symbolic representations for a very wide range of things—from videos to molecules to IP addresses. And when there are standard representations for these things in a language like Python, we’ll automatically convert to and from them.

\n

But particularly with more sophisticated packages, there’ll be a need to let the package deal with its own “external objects” that are basically opaque to the Wolfram Language, but can be handled as atomic symbolic constructs there.

\n

For example, let’s say we’ve set up the Python external package chess (and, yes, there’s a paclet in the Wolfram Paclet Repository that has considerably more chess functionality):

\n
\n
\n

\n

Now the state of a chessboard can be represented by an external object:

\n
\n
\n

\n

We can define a function to plot the board:

\n
\n
\n

\n

And now in Version 14.1 you can just pass your external object to the external function:

\n
\n
\n

\n

You can also directly extract attributes of the external object:

\n
\n
\n

\n

And you can call methods (here to make a chess move), changing the state of the external object:

\n
\n
\n

\n

Here’s a plot of a new board configuration:

\n
\n
\n

\n

This computes all legal moves from the current position, representing them as external objects:

\n
\n
\n

\n

Here are UCI string representations of these:

\n
\n
\n

\n

In what we’re doing here we’re immediately performing each external operation. But Version 14.1 introduces the construct ExternalOperation which lets you symbolically represent an external operation, and for example build up collections of such operations that can all be performed together in a single external evaluation. ExternalObject supports various built-in operations for each environment. So, for example, in Python we can use Call and GetAttribute to get the symbolic representation:

\n
\n
\n

\n

If we evaluate this, all these operations will get done together in the external environment:

\n
\n
\n

\n
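The batching pattern itself is generic. Here is a hypothetical Python sketch (all names invented for illustration) in which operations are recorded symbolically and then performed together in one evaluation:

```python
class ExternalOperation:
    """Symbolic record of an operation to perform later in the external environment."""
    def __init__(self, kind, name, *args):
        self.kind, self.name, self.args = kind, name, args

class ExternalSession:
    """Sketch: collect Call/GetAttribute operations, then run them as one batch."""
    def __init__(self, obj):
        self.obj = obj

    def evaluate(self, ops):
        results = []
        for op in ops:   # the whole batch runs in one "round trip"
            if op.kind == "Call":
                results.append(getattr(self.obj, op.name)(*op.args))
            elif op.kind == "GetAttribute":
                results.append(getattr(self.obj, op.name))
        return results

class Board:
    """Stand-in for an opaque external object, such as a chess board."""
    def __init__(self):
        self.turn = "white"
        self.moves = []

    def push(self, move):
        self.moves.append(move)
        return self

session = ExternalSession(Board())
session.evaluate([ExternalOperation("Call", "push", "e2e4"),
                  ExternalOperation("GetAttribute", "turn")])
```

The payoff of deferring like this is that a sequence of small operations costs one inter-process round trip instead of one per operation.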

Standalone Wolfram Language Applications!

\n

Let’s say you’re writing an application in pretty much any programming language—and inside it you want to call Wolfram Language functionality. Well, you could always do that by using a web API served from the Wolfram Cloud. And you could also do it locally by running the Wolfram Engine. But in Version 14.1 there’s something new: a way of integrating a standalone Wolfram Language runtime right into your application. The Wolfram Language runtime is a dynamic library that you link into your program, and then call using a C-based API. How big is the runtime? Well, it depends on what you want to use in the Wolfram Language. Because we now have the technology to prune a runtime to include only capabilities needed for particular Wolfram Language code. And the result is that adding the Wolfram Language will often increase the disk requirements of your application only by a remarkably small amount—like just a few hundred megabytes or even less. And, by the way, you can distribute the Wolfram runtime as an integrated part of an application, with its users not needing their own licenses to run it.

\n

OK, so how does creating a standalone Wolfram-enabled application actually work? There’s a lot of software engineering (associated with the Wolfram Language runtime, how it’s called, etc.) under the hood. But at the level of the application programmer you only have to deal with our Standalone Applications SDK—whose interface is rather simple.

\n

As an example, here’s the C code part of a standalone application that uses the Wolfram Language to identify what (human) language a piece of text is in. The program here takes a string of text on its command line, then runs the Wolfram Language LanguageIdentify function on it, and then prints a string giving the result:

\n

C code using Wolfram Language

\n

If we ignore issues of pruning, etc. we can compile this program just with (and, yes, the file paths are necessarily a bit long):

\n

Compiled C program

\n

Now we can run the resulting executable directly from the command line—and it’ll act just like any other executable, even though inside it’s got all the power of a Wolfram Language runtime:

\n

Command-line executable

\n

If we look at the C program above, it basically begins just by starting the Wolfram Language runtime (using WLR_SDK_START_RUNTIME()). But then it takes the string (argv[1]) from the command line, embeds it in a Wolfram Language expression LanguageIdentify[string], evaluates this expression, and extracts a raw string from the result.

\n

The functions, etc. that are involved here are part of the new Expression API supported by the Wolfram Language runtime dynamic library. The Expression API provides very clean capabilities for building up and taking apart Wolfram Language expressions from C. There are functions like wlr_Symbol(\"string\") that form symbols, as well as macros like wlr_List(elem1, elem2, …) and wlr_E(head, arg1, arg2, …) that build up lists and general expressions. Then there’s the function wlr_Eval(expr) that calls the Wolfram Language evaluator. With functions like wlr_StringData(expr, &result, …) you can then extract content from expressions (here the characters in a string) and put it into C data structures.

\n

How does the Expression API relate to WSTP? WSTP (“Wolfram Symbolic Transfer Protocol”) is our protocol for transferring symbolic expressions between processes. The Expression API, on the other hand, operates within a single process, providing the “glue” that connects C code to expressions in the Wolfram Language runtime.

\n

One example of a real-world use of our new Standalone Applications technology is the LSPServer application that will soon be in full distribution. LSPServer started from a pure (though somewhat lengthy) Wolfram Language paclet that provides Language Server Protocol services for annotating Wolfram Language code in programs like Visual Studio Code. To build the LSPServer standalone application we just wrote a tiny C program that calls the paclet, then compiled this and linked it against our Standalone Applications SDK. Along the way (using tools that we’re planning to soon make available)—and based on the fact that only a small part of the full functionality of the Wolfram Language is needed to support LSPServer—we pruned the Wolfram Language runtime, in the end getting a complete LSPServer application that’s only about 170 MB in size, and that shows no outside signs of having Wolfram Language functionality inside.

\n

And Yet More…

\n

Is that all? Well, no. There’s more. Like new formatting of Root objects (yes, I was frustrated with the old one). Or like a new drag-and-drop-to-answer option for QuestionObject quizzes. Or like all the documentation we’ve added for new types of entities and interpreters.

\n

In addition, there’s also the continual stream of new data that we’ve curated, or that’s flowed in real time into the Wolfram Knowledgebase. And beyond the core Wolfram Language itself, there’ve also been lots of functions added to the Wolfram Function Repository, lots of paclets added to the Wolfram Language Paclet Repository, not to mention new entries in the Wolfram Neural Net Repository, Wolfram Data Repository, etc.

\n

Yes, as always it’s been a lot of work. But today it’s here, and we’re proud of it: Version 14.1!

\n

\n

\n\n

\n", + "category": "Mathematica", + "link": "https://writings.stephenwolfram.com/2024/07/yet-more-new-ideas-and-new-functions-launching-version-14-1-of-wolfram-language-mathematica/", + "creator": "Stephen Wolfram", + "pubDate": "Wed, 31 Jul 2024 21:53:02 +0000", + "enclosure": "https://content.wolfram.com/sites/43/2024/07/manipulatevideo.mp4", + "enclosureType": "video/mp4", + "image": "https://content.wolfram.com/sites/43/2024/07/manipulatevideo.mp4", + "id": "", + "language": "en", + "folder": "", + "feed": "wolfram", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "2caa8a42eeba4573c46766c995597c5f", "highlights": [] }, { @@ -109,28 +351,6 @@ "hash": "4c1ebd40f436b92b2452a5995b89f1c9", "highlights": [] }, - { - "title": "Aggregation and Tiling as Multicomputational Processes", - "description": "\"\"The Importance of Multiway Systems It’s all about systems where there can in effect be many possible paths of history. In a typical standard computational system like a cellular automaton, there’s always just one path, defined by evolution from one state to the next. But in a multiway system, there can be many possible next […]", - "content": "\"\"

\"Aggregation

\n

The Importance of Multiway Systems

\n

It’s all about systems where there can in effect be many possible paths of history. In a typical standard computational system like a cellular automaton, there’s always just one path, defined by evolution from one state to the next. But in a multiway system, there can be many possible next states—and thus many possible paths of history. Multiway systems have a central role in our Physics Project, particularly in connection with quantum mechanics. But what’s now emerging is that multiway systems in fact serve as a quite general foundation for a whole new “multicomputational” paradigm for modeling.

\n

My objective here is twofold. First, I want to use multiway systems as minimal models for growth processes based on aggregation and tiling. And second, I want to use this concrete application as a way to develop further intuition about multiway systems in general. Elsewhere I have explored multiway systems for strings, multiway systems based on numbers, multiway Turing machines, multiway combinators, multiway expression evaluation and multiway systems based on games and puzzles. But in studying multiway systems for aggregation and tiling, we’ll be dealing with something that is immediately more physical and tangible.

\n

When we think of “growth by aggregation” we typically imagine a “random process” in which new pieces get added “at random” to something. But each of these “random possibilities” in effect defines a different path of history. And the concept of a multiway system is to capture all those possibilities together. In a typical random (or “stochastic”) model one’s just tracing a single path of history, and one imagines one doesn’t have enough information to say which path it will be. But in a multiway system one’s looking at all the paths. And in doing so, one’s in a sense making a model for the “whole story” of what can happen.

\n

The choice of a single path can be “nondeterministic”. But the whole multiway system is deterministic. And by studying that “deterministic whole” it’s often possible to make useful, quite general statements.

\n

One can think of a particular moment in the evolution of a multiway system as giving something like an ensemble of states of the kind studied in statistical mechanics. But the general concept of a multiway system, with its discrete branching at discrete steps, depends on a level of fundamental discreteness that’s quite unfamiliar from traditional statistical mechanics—though is perfectly straightforward to define in a computational, or even mathematical, way.

\n

For aggregation it’s easy enough to set up a minimal discrete model—at least if one allows explicit randomness in the model. But a major point of what we’ll do here is to “go above” that randomness, setting up our model in terms of a whole, deterministic multiway system.

\n

What can we learn by looking at this whole multiway system? Well, for example, we can see whether there’ll always be growth—whatever the random choices may be—or whether the growth will sometimes, or even always, stop. And in many practical applications (think, for example, tumors) it can be very important to know whether growth always stops—or through what paths it can continue.

\n

A lot of what we’ll at first do here involves seeing the effect of local constraints on growth. Later on, we’ll also look at effects of geometry, and we’ll study how objects of different shapes can aggregate, or ultimately tile.

\n

The models we’ll introduce are in a sense very minimal—combining the simplest multiway structures with the simplest spatial structures. And with this minimality it’s almost inevitable that the models will show up as idealizations of all sorts of systems—and as foundations for good models of these systems.

\n

At first, multiway systems can seem rather abstract and difficult to grasp—and perhaps that’s inevitable given our human tendency to think sequentially. But by seeing how multiway systems play out in the concrete case of growth processes, we get to build our intuition and develop a more grounded view—that will stand us in good stead in exploring other applications of multiway systems, and in general in coming to terms with the whole multicomputational paradigm.

\n

The Simplest Case

\n

It’s the ultimate minimal model for random discrete growth (often called the Eden model). On a square grid, start with one black cell, then at each step randomly attach a new black cell somewhere onto the growing “cluster”:

\n
\n
\n

\n

After 10,000 steps we might get:

\n
\n
\n

\n
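As a minimal Python sketch (our own code—the experiments in this article are done in Wolfram Language), this random growth process can be implemented by tracking the cluster together with its frontier of attachable sites:

```python
import random

def eden_growth(steps, seed=0):
    """Eden model: repeatedly add a random empty cell adjacent
    to the growing cluster on a square grid."""
    rng = random.Random(seed)
    cluster = {(0, 0)}                       # start from a single cell
    frontier = {(1, 0), (-1, 0), (0, 1), (0, -1)}
    for _ in range(steps):
        cell = rng.choice(tuple(frontier))   # random attachment site
        cluster.add(cell)
        frontier.discard(cell)
        x, y = cell
        for nbr in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nbr not in cluster:
                frontier.add(nbr)
    return cluster

cluster = eden_growth(2000)   # fewer steps than the 10,000 above, for speed
print(len(cluster))           # 2001: each step adds exactly one cell
```

Each step adds exactly one cell, so a run of n steps yields a cluster of n + 1 cells.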

But what are all the possible things that can happen? For that, we can construct a multiway system:

\n
\n
\n

\n

A lot of these clusters differ only by a trivial translation; canonicalizing by translation we get

\n
\n
\n

\n

or after another step:

\n
\n
\n

\n

If we also reduce out rotations and reflections we get

\n
\n
\n

\n

or after another step:

\n
\n
\n

\n

The possible clusters after t steps are just the possible polyominoes (or “square lattice animals”) with t cells. The number of these for successive t is

\n
\n
\n

\n

growing roughly like k^t for large t, with k a little larger than 4:

\n
\n
\n

\n

By the way, canonicalization by translation always reduces the number of possible clusters by a factor of t. Canonicalization by rotation and reflection can reduce the number by a factor of 8 if the cluster has no symmetry (which for large clusters becomes increasingly likely), and by a smaller factor the more symmetry the cluster has, as in:

\n
\n
\n

\n
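The canonicalization and the multiway step can be sketched in Python (a minimal sketch with our own representation of clusters as sets of coordinates; the article itself works in Wolfram Language). Counting canonical states step by step reproduces the free polyomino counts 1, 1, 2, 5, 12:

```python
def canonical(cells):
    """Canonical form of a cluster under translation, rotation and reflection."""
    pts = list(cells)
    forms = []
    for _ in range(4):
        pts = [(-y, x) for x, y in pts]              # rotate by 90 degrees
        for mirror in (pts, [(-x, y) for x, y in pts]):
            xs = [x for x, _ in mirror]
            ys = [y for _, y in mirror]
            forms.append(tuple(sorted((x - min(xs), y - min(ys))
                                      for x, y in mirror)))
    return min(forms)

def grow(clusters):
    """One multiway step: add one adjacent cell in every possible way."""
    out = set()
    for cl in clusters:
        cells = set(cl)
        for x, y in cells:
            for nbr in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nbr not in cells:
                    out.add(canonical(cells | {nbr}))
    return out

clusters = {canonical([(0, 0)])}
counts = [len(clusters)]
for _ in range(4):
    clusters = grow(clusters)
    counts.append(len(clusters))
print(counts)  # [1, 1, 2, 5, 12] — the free polyominoes with 1..5 cells
```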

With canonicalization, the multiway graph after 7 steps has the form

\n
\n
\n

\n

and it doesn’t look any simpler with alternative rendering:

\n
\n
\n

\n

If we imagine that at each step, cells are added with equal probability at every possible position on the cluster, or equivalently that all outgoing edges from a given cluster in the uncanonicalized multiway graph are followed with equal probability, then we can get a distribution of probabilities for the distinct canonical clusters obtained—here shown after 7 steps:

\n
\n
\n

\n

One feature of the large random cluster we saw at the beginning is that it has some holes in it. Clusters with holes start developing after 7 steps, with the smallest being:

\n
\n
\n

\n

This cluster can be reached through a subset of the multiway system:

\n
\n
\n

\n

And in fact in the limit of large clusters, the probability for there to be a hole seems to approach 1—even though the total fraction of area covered by holes approaches 0.

\n

One way to characterize the “space of possible clusters” is to create a branchial graph by connecting every pair of clusters that have a common ancestor one step back in the multiway graph:

\n
\n
\n

\n

The connectedness of all these graphs reflects the fact that with the rule we’re using, it’s always possible at any step to go from one cluster to another by a sequence of delete-one-cell/add-one-cell changes.
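For small clusters this connectedness can be verified directly. The following Python sketch (our own code, not the article's) builds the branchial graph on the 12 free pentominoes—joining every pair of size-5 clusters that share a size-4 common ancestor—and checks that it is connected:

```python
from itertools import combinations

def canonical(cells):
    """Canonical form under translation, rotation and reflection."""
    pts = list(cells)
    forms = []
    for _ in range(4):
        pts = [(-y, x) for x, y in pts]              # rotate by 90 degrees
        for mirror in (pts, [(-x, y) for x, y in pts]):
            xs = [x for x, _ in mirror]
            ys = [y for _, y in mirror]
            forms.append(tuple(sorted((x - min(xs), y - min(ys))
                                      for x, y in mirror)))
    return min(forms)

def children(cl):
    """Canonical forms of all one-cell-larger clusters reachable from cl."""
    cells, out = set(cl), set()
    for x, y in cells:
        for nbr in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nbr not in cells:
                out.add(canonical(cells | {nbr}))
    return out

# grow from a single cell to the 5 canonical tetrominoes
tets = {canonical([(0, 0)])}
for _ in range(3):
    tets = {c for t in tets for c in children(t)}

# branchial graph: connect children sharing a common tetromino parent
pents, edges = set(), set()
for t in tets:
    kids = children(t)
    pents |= kids
    edges |= {frozenset(pair) for pair in combinations(kids, 2)}

# breadth-first check that the graph on the free pentominoes is connected
seen, todo = set(), [next(iter(pents))]
while todo:
    p = todo.pop()
    if p in seen:
        continue
    seen.add(p)
    todo += [q for e in edges if p in e for q in e if q != p]
print(len(pents), seen == pents)
```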

\n

The branchial graphs here also show a 4-fold symmetry resulting from the symmetry of the underlying lattice. Canonicalizing the states, we get smaller branchial graphs that no longer show any such symmetry:

\n
\n
\n

\n

Totalistically Constrained Growth (4-Cell Neighborhoods)

\n

With the rule we’ve been discussing so far, a new cell to be attached can be anywhere on a cluster. But what if we limit growth, by requiring that new cells must have certain numbers of existing cells around them? Specifically, let’s consider rules that look at the neighbors around any given position, and allow a new cell there only if there are specified numbers of existing cells in the neighborhood.

\n

Starting with a cross of black cells, here are some examples of random clusters one gets after 20 steps with all possible rules of this type (the initial “4” designates that these are 4-neighbor rules):

\n
\n
\n

\n

Rules that don’t allow new cells to end up with just one existing neighbor can only fill in corners in their initial conditions, and can’t grow any further. But any rule that allows growth with only one existing neighbor produces clusters that keep growing forever. And here are some random examples of what one can get after 10,000 steps:

\n
\n
\n

\n
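Here is a minimal Python sketch (our own code) of random evolution under the grow-with-exactly-one-neighbor rule 4:{1}, starting from the cross initial condition; since a new cell can never have two occupied neighbors, the resulting cluster contains no 2×2 blocks:

```python
import random

def allowed_sites(cluster):
    """Empty positions with exactly one occupied orthogonal
    neighbor (the 4:{1} rule)."""
    sites = set()
    for x, y in cluster:
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (nx, ny) in cluster:
                continue
            k = sum((nx + dx, ny + dy) in cluster
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))
            if k == 1:
                sites.add((nx, ny))
    return sites

def random_growth(steps, seed=1):
    rng = random.Random(seed)
    # the cross initial condition used above
    cluster = {(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)}
    for _ in range(steps):
        sites = allowed_sites(cluster)
        if not sites:           # a terminal cluster (never happens for 4:{1})
            break
        cluster.add(rng.choice(tuple(sites)))
    return cluster

cl = random_growth(200)
print(len(cl))  # 205: this rule never gets stuck
```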

The last of these is the unconstrained (Eden model) rule we already discussed above. But let’s look more carefully at the first case—where there’s growth only if a new cell will end up with exactly one neighbor. The canonicalized multiway graph in this case is:

\n
\n
\n

\n

The possible clusters here correspond to polyominoes that are “always one cell wide” (i.e. have no 2×2 blocks), or, equivalently, have perimeter 2t + 2 at step t. The number of such canonicalized clusters grows like:

\n
\n
\n

\n

This is a decreasing fraction of the total number of polyominoes—so for large t, only a small minority of polyominoes take this “spindly” form.

\n

A new feature of a rule with constraints is that not all locations around a cluster may allow growth. Here is a version of the multiway system above, with cells around each cluster annotated with green if new growth is allowed there, and red if it never can be:

\n
\n
\n

\n

In a larger random cluster, we can see that with this rule, most of the interior is “dead” in the sense that the constraint of the rule allows no further growth there:

\n
\n
\n

\n

By the way, the clusters generated by this rule can always be directly represented by their “skeleton graphs”:

\n
\n
\n

\n

Looking at random clusters for all the (grow-with-1-neighbor) rules above, we see different patterns of holes in each case:

\n
\n
\n

\n

There are altogether five types of cells being distinguished here, reflecting different neighbor configurations:

\n
\n
\n

\n

Here’s a sample cluster generated with the 4:{1,3} rule:

\n
\n
\n

\n

Cells indicated with \"\" already have too many neighbors, and so can never be added to the cluster. Cells indicated with \"\" have exactly the right number of neighbors to be added immediately. Cells indicated with \"\" don’t currently have the right number of neighbors to grow, but if neighbors are filled in, they might be able to be added. Sometimes it will turn out that when neighbors of \"\" cells get filled in, they will actually prevent the cell from being added (so that it becomes \"\")—and in the particular case shown here that happens with the 2×2 blocks of \"\" cells.

\n

The multiway graphs from the rules shown here are all qualitatively similar, but there are detailed differences. In particular, at least for many of the rules, an increasing number of states are “missing” relative to what one gets with the grow-in-all-cases 4:{1,2,3,4} rule—or, in other words, there are an increasing number of polyominoes that can’t be generated given the constraints:

\n
\n
\n

\n

The first polyomino that can’t be reached (which occurs at step 4) is:

\n
\n
\n

\n

At step 6 the polyominoes that can’t be reached for rules 4:{1,3} and 4:{1,3,4} are

\n
\n
\n

\n

while for 4:{1} and 4:{1,4} the additional polyomino

\n
\n
\n

\n

can also not be reached.

\n

At step 8, the polyomino

\n
\n
\n

\n

is reachable with 4:{1} and 4:{1,3} but not with 4:{1,4} and 4:{1,3,4}.

\n

Of some note is that none of the rules that exclude polyominoes can reach:

\n
\n
\n

\n

Totalistically Constrained Growth (8-Cell Neighborhoods)

\n

What happens if one considers diagonal as well as orthogonal neighbors, giving a total of 8 neighbors around a cell? There are 256 possible rules in this case, corresponding to the possible subsets of Range[8]. Here are samples of what they do after 200 steps, starting from an initial cluster:

\n
\n
\n

\n

Two cases that at least initially show growth here are (the “8” designates that these are 8-neighbor rules):

\n
\n
\n

\n

In the {2} case, the multiway graph begins with:

\n
\n
\n

\n

One might assume that every branch in this graph would continue forever, and that growth would never “get stuck”. But it turns out that after 9 steps the following cluster is generated:

\n
\n
\n

\n

And with this cluster, no further growth is possible: no positions around the boundary have exactly 2 neighbors. In the multiway graph up to 10 steps, it turns out this is the only “terminal cluster” that can be generated—out of a total of 1115 possible clusters:

\n
\n
\n

\n

So how is that terminal cluster reached? Here’s the fragment of multiway graph that leads to it:

\n
\n
\n

\n

If we don’t prune off all the ways to “go astray”, the fragment appears as part of a larger multiway graph:

\n
\n
\n

\n

And if one follows all paths in the unpruned (and uncanonicalized) multiway graph at random (i.e. at each step, one chooses each branch with equal probability), it turns out that the probability of ever reaching this particular terminal cluster is just:

\n
\n
\n

\n

(And the fact that this number is fairly small implies that the system is far from confluent; there are many paths that, for example, don’t converge to the fixed point corresponding to this terminal cluster.)

\n

If we keep going in the evolution of the multiway system, we’ll reach other terminal clusters; after 12 steps the following have appeared:

\n
\n
\n

\n

For the {3} rule above, the multiway system takes a little longer to “get going”:

\n
\n
\n

\n

Once again there are terminal clusters where the system gets stuck; the first of them appears at step 14:

\n
\n
\n

\n

And also once again the terminal cluster appears as an isolated node in the whole multiway system:

\n
\n
\n

\n

The fragment of multiway graph that leads to it is:

\n
\n
\n

\n

So far we’ve been finding terminal clusters by waiting for them to appear in the evolution of the multiway system. But there’s another approach, similar to what one might use in filling in something like a tiling. The idea is that every cell in a terminal cluster must have neighbors that don’t allow further growth. In other words, the terminal cluster must consist of certain “local tiles” for which the constraints don’t allow growth. But what configurations of local tiles are possible? To determine this, we turn the matching conditions for the tiles into logical expressions whose variables are True and False depending on whether particular positions in the template do or do not contain cells in the cluster. By solving the satisfiability problem for the combination of these logical expressions, one finds configurations of cells that could conceivably correspond to terminal clusters.
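For very small regions the same kind of search can be done by brute-force enumeration rather than satisfiability solving (the article's approach scales to 6×6 regions; the Python sketch below, our own code, exhaustively checks a 4×4 region for the 8:{2} rule):

```python
from itertools import product

def neighbors8(x, y):
    return [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def is_connected(cluster):
    seen, todo = set(), [next(iter(cluster))]
    while todo:
        c = todo.pop()
        if c not in seen:
            seen.add(c)
            todo += [n for n in neighbors8(*c) if n in cluster]
    return seen == cluster

def is_terminal(cluster):
    """True if no empty neighboring position has exactly 2 occupied
    8-neighbors, so the 8:{2} rule allows no further growth."""
    boundary = {n for c in cluster for n in neighbors8(*c)} - cluster
    return all(sum(m in cluster for m in neighbors8(*p)) != 2
               for p in boundary)

# exhaustively test every connected subset of a 4x4 region
cells = list(product(range(4), range(4)))
terminal = []
for bits in product((0, 1), repeat=16):
    cl = {c for c, b in zip(cells, bits) if b}
    if cl and is_connected(cl) and is_terminal(cl):
        terminal.append(frozenset(cl))
print(len(terminal))
```

Note that, as discussed below, many of the configurations found this way can only occur as initial conditions—a single cell, for example, is trivially terminal under 8:{2}.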

\n

Following this procedure for the {2} rules with regions of up to 6×6 cells we find:

\n
\n
\n

\n

But now there’s an additional constraint. Assuming one starts from a connected initial cluster, any subsequent cluster generated must also be connected. Removing the non-connected cases we get:

\n
\n
\n

\n

So given these terminal clusters, what initial conditions can lead to them? To determine this we effectively have to invert the aggregation process—giving in the end a multiway graph that includes all initial conditions that can generate a given terminal cluster. For the smallest terminal cluster we get:

\n
\n
\n

\n

Our 4-cell “T” initial condition appears here—but we see that there are also even smaller 2-cell initial conditions that lead to the same terminal cluster.

\n

For all the terminal clusters we showed before, we can construct the multiway graphs starting with the minimal initial clusters that lead to them:

\n
\n
\n

\n

For terminal clusters like

\n
\n
\n

\n

there’s no nontrivial multiway system to show, since these clusters can only appear as initial conditions; they can never be generated in the evolution.

\n

There are quite a few small clusters that can only appear as initial conditions, and do not have preimages under the aggregation rule. Here are the cases that fit in a 3×3 region:

\n
\n
\n

\n

The case of the {3} rule is fairly similar to the {2} rule. The possible terminal clusters up to 5×5 are:

\n
\n
\n

\n

However, most of these have only a fairly limited set of possible preimages:

\n
\n
\n

\n

For example we have:

\n
\n
\n

\n

And indeed beyond the (size-17) example we already showed above, no other terminal clusters that can be generated from a T initial condition appear here. Sampling further, however, additional terminal clusters appear (beginning at size 25):

\n
\n
\n

\n

The fragments of multiway graphs for the first few of these are:

\n
\n
\n

\n

Random Evolution

\n

We’ve seen above that for the rules we’ve been investigating, terminal clusters are quite rare among possible states in the multiway system. But what happens if we just evolve at random? How often will we wind up with a terminal cluster? When we say “evolve at random”, what we mean is that at each step we’re going to look at all possible positions where a new cell could be added to the cluster that exists so far, and then we’re going to pick with equal probability at which of these to actually add the new cell.

\n

For the 8:{3} rule something surprising happens. Even though terminal clusters are rare in its multiway graph, it turns out that regardless of its initial conditions, it always eventually reaches a terminal cluster—though it often takes a while. And here, for example, are a few possible terminal clusters, annotated with the number of steps it took to reach them (which is also equal to the number of cells they contain):

\n
\n
\n

\n

The distribution of the number of steps to termination seems to be very roughly exponential (here based on a sample of 10,000 random cases)—with mean lifetime around 2300 and half-life around 7400:

\n
\n
\n

\n

Here’s an example of a large terminal cluster—that takes 21,912 steps to generate:

\n
\n
\n

\n

And here’s a map showing when growth in different parts of this cluster occurred (with blue being earliest and red being latest):

\n
\n
\n

\n

This picture suggests that different parts of the cluster “actively grow” at different times, and if we look at a “spacetime” plot of where growth occurs as a function of time, we can confirm this:

\n
\n
\n

\n

And indeed what this suggests is that different parts of the cluster are at first “fertile”, but later inevitably “burn out”—so that in the end there are no possible positions left where growth can occur.

\n

But what shapes can the final terminal clusters form? We can get some idea by looking at a “compactness measure” (of the kind often used to study gerrymandering) that roughly gives the standard deviation of the distances from the center of each cluster to each of the cells in it. Both “very stringy” and “roughly circular” clusters are fairly rare; most clusters lie somewhere in between:

\n
\n
\n

\n
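A measure of this kind can be sketched in Python (the normalization here is our own choice—the article describes the measure only roughly, as a standard deviation of distances from the center):

```python
from statistics import mean, pstdev

def compactness(cluster):
    """Spread of a cluster: standard deviation of the distances
    from the centroid to each cell (a rough compactness proxy)."""
    cx = mean(x for x, _ in cluster)
    cy = mean(y for _, y in cluster)
    dists = [((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in cluster]
    return pstdev(dists)

# a "roughly circular" 3x3 block vs a "very stringy" 1x9 line of equal area
block = [(x, y) for x in range(3) for y in range(3)]
line = [(x, 0) for x in range(9)]
print(compactness(block), compactness(line))  # the line scores much higher
```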

If we look not at the 8:{3} but instead at the 8:{2} rule, things are very different. Once again, it’s possible to reach a terminal cluster, as the multiway graph shows. But now random evolution almost never reaches a terminal cluster, and instead almost always “runs away” to generate an infinite cluster. The clusters generated in this case are typically much more “compact” than in the 8:{3} case

\n
\n
\n

\n

and this is also reflected in the “spacetime” version:

\n
\n
\n

\n

Parallel Growth and Causal Graphs

\n

In building up our clusters so far, we’ve always been assuming that cells are added sequentially, one at a time. But if two cells are far enough apart, we can actually add them “simultaneously”, in parallel, and end up building the same cluster. We can think of the addition of each cell as being an “event” that updates the state of the cluster. Then—just like in our Physics Project, and other applications of multicomputation—we can define a causal graph that represents the causal dependencies between these events, and then foliations of this causal graph tell us possible overall sequences of updates, including parallel.

\n

As an example, consider this sequence of states in the “always grow” 4:{1,2,3,4} rule—where at each step the cell that’s new is colored red (and we’re including the “nothing” state at the beginning):

\n
\n
\n

\n

Every transition between successive states defines an event:

\n
\n
\n

\n

There’s then causal dependence of one event on another if the cell added in the second event is adjacent to the one added in the first event. So, for example, there are causal dependencies like

\n
\n
\n

\n

and

\n
\n
\n

\n

where in the second case additional “spatially separated” cells have been added that aren’t involved in the causal dependence. Putting all the causal dependencies together, we get the complete causal graph for this evolution:

\n
\n
\n

\n
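Constructing such a causal graph from a growth sequence can be sketched in Python (our own code): event j causally depends on an earlier event i when the cell added by event j is adjacent to the cell added by event i:

```python
def causal_edges(added):
    """Edges of the causal graph: event j depends on an earlier event i
    when the cell added by j is orthogonally adjacent to the cell added by i."""
    edges = []
    for j, (xj, yj) in enumerate(added):
        for i, (xi, yi) in enumerate(added[:j]):
            if abs(xi - xj) + abs(yi - yj) == 1:
                edges.append((i, j))
    return edges

# a growth sequence that builds a 2x2 square one cell at a time
seq = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(causal_edges(seq))  # [(0, 1), (1, 2), (0, 3), (2, 3)]
```

Any topological ordering of the resulting graph is then a valid sequential history, and any antichain of causally independent events can be applied in parallel.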

We can recover our original sequence of states by picking a particular ordering of these events (here indicated by the positions of the cells they add):

\n
\n
\n

\n

This path has the property that it always follows the direction of causal edges—and we can make that more obvious by using a different layout for the causal graph:

\n
\n
\n

\n

But in general we can use any ordering of events consistent with the causal graph. Another ordering (out of a total of 40,320 possibilities in this case) is

\n
\n
\n

\n

which gives the sequence of states

\n
\n
\n

\n

with the same final cluster configuration, but different intermediate states.

\n

But now the point is that the constraints implied by the causal graph do not require all events to be applied sequentially. Some events can be considered “spacelike separated” and so can be applied simultaneously. And in fact, any foliation of the causal graph defines a certain sequence for applying events—either sequentially or in parallel. So, for example, here is one particular foliation of the causal graph (shown with two different renderings for the causal graph):

\n
\n
\n

\n

And here is the corresponding sequence of states obtained:

\n
\n
\n

\n

And since in some slices of this foliation multiple events happen “in parallel”, it’s “faster” to get to the final configuration. (As it happens, this foliation is like a “cosmological rest frame foliation” in our Physics Project, and involves the maximum possible number of events happening on each slice.)

\n

Different foliations (and there are a total of 678,972 possibilities in this case) will give different sequences of states, but always the same final state:

\n
\n
\n

\n

Note that nothing we’ve done here depends on the particular rule we’ve used. So, for example, for the 8:{2} rule with sequence of states

\n
\n
\n

\n

the causal graph is:

\n
\n
\n

\n

It’s worth commenting that everything we’ve done here has been for particular sequences of states, i.e. particular paths in the multiway graph. And in effect what we’re doing is the analog of classical spacetime physics—tracing out causal dependencies in particular evolution histories. But in general we could look at the whole multiway causal graph, with events that are not only timelike or spacelike separated, but also branchlike separated. And if we make foliations of this graph, we’ll end up not only with “classical” spacetime states, but also “quantum” superposition states that would need to be represented by something like multispace (in which at each spatial position, there is a “branchial stack” of possible cell values).

\n

The One-Dimensional Case

\n

So far we’ve been considering aggregation processes in two dimensions. But what about one dimension? In 1D, a “cluster” just consists of a sequence of cells. The simplest rule allows a cell to be added whenever it’s adjacent to a cell that’s already there. Starting from a single cell, here’s a possible random evolution according to such a rule, shown evolving down the page:

\n
\n
\n

\n

We can also construct the multiway system for this rule:

\n
\n
\n

\n

Canonicalizing the states gives the trivial multiway graph:

\n
\n
\n

\n

But just like in the 2D case things get less trivial if there are constraints on growth. For example, assume that before placing a new cell we count the number of existing cells that lie either distance 1 or distance 2 away—and allow the new cell only if this count is exactly 1. Then we get behavior like:

\n
\n
\n

\n

The corresponding multiway system is

\n
\n
\n

\n

or after canonicalization:

\n
\n
\n

\n

The number of distinct sequences after t steps here is given by

\n
\n
\n

\n

which can be expressed in terms of Fibonacci numbers, and for large t grows like φ^t, where φ is the golden ratio.

\n

The rule in effect generates all possible Morse-code-like sequences, consisting of runs of either 2-cell (“long”) black blocks or 1-cell (“short”) black blocks, interspersed by “gaps” of single white cells.
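We can verify this characterization with a small Python sketch (our own code) that enumerates the canonicalized multiway states of this rule and checks that every configuration consists of 1- or 2-cell black blocks separated by single white cells:

```python
def canonical(cells):
    """Canonicalize a 1D configuration under translation and reflection."""
    def shift(cs):
        m = min(cs)
        return tuple(sorted(c - m for c in cs))
    return min(shift(cells), shift([-c for c in cells]))

def step(states):
    """One multiway step: a new cell may be placed where exactly one
    existing cell lies at distance 1 or 2."""
    out = set()
    for cl in states:
        occupied = set(cl)
        for c in cl:
            for p in (c - 2, c - 1, c + 1, c + 2):
                if p in occupied:
                    continue
                if sum(q in occupied for q in (p - 2, p - 1, p + 1, p + 2)) == 1:
                    out.add(canonical(occupied | {p}))
    return out

states, counts = {(0,)}, [1]
for _ in range(6):
    states = step(states)
    counts.append(len(states))

# every reachable state is "Morse-code-like": runs of 1 or 2 black
# cells separated by single white cells
for st in states:
    cells = sorted(st)
    run = 1
    for a, b in zip(cells, cells[1:]):
        assert b - a in (1, 2)          # gaps are at most one white cell
        run = run + 1 if b - a == 1 else 1
        assert run <= 2                 # black blocks have length 1 or 2
print(counts)
```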

\n

The branchial graphs for this system have the form:

\n
\n
\n

\n

Looking at random evolutions for all possible rules of this type we get:

\n
\n
\n

\n

The corresponding canonicalized multiway graphs are:

\n
\n
\n

\n

The rules we’ve looked at so far are purely totalistic: whether a new cell can be added depends only on the total number of cells in its neighborhood. But (much like, for example, in cellular automata) it’s also possible to have rules where whether one can add a new cell depends on the complete configuration of cells in a neighborhood. Mostly, however, such rules seem to behave very much like totalistic ones.

\n

Other generalizations include, for example, rules with multiple “colors” of cells, and rules that depend either on the total number of cells of different colors, or their detailed configurations.

\n

The Three-Dimensional Case

\n

The kind of analysis we’ve done for 2D and 1D aggregation systems can readily be extended to 3D. As a first example, consider a rule in which cells can be added along each of the 6 coordinate directions in a 3D grid whenever they are adjacent to an existing cell. Here are some typical examples of random clusters formed in this case:

\n
\n
\n

\n
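A minimal Python sketch of this 6-neighbor random aggregation (our own code, not the article's):

```python
import random

def aggregate3d(steps, seed=0):
    """Random 3D aggregation: repeatedly add a cell face-adjacent
    (along one of the 6 coordinate directions) to the cluster."""
    rng = random.Random(seed)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    cluster = {(0, 0, 0)}
    frontier = set(offsets)            # empty cells adjacent to the cluster
    for _ in range(steps):
        cell = rng.choice(tuple(frontier))
        cluster.add(cell)
        frontier.discard(cell)
        x, y, z = cell
        for dx, dy, dz in offsets:
            nbr = (x + dx, y + dy, z + dz)
            if nbr not in cluster:
                frontier.add(nbr)
    return cluster

cluster3d = aggregate3d(1000)
print(len(cluster3d))  # 1001
```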

Taking successive slices through the first of these (and coloring by “age”) we get:

\n
\n
\n

\n

If we allow a cell to be added only when it is adjacent to just one existing cell (corresponding to the rule 6:{1}) we get clusters that from the outside look almost indistinguishable

\n
\n
\n

\n

but which have an “airier” internal structure:

\n
\n
\n

\n

Much like in 2D, with 6 neighbors, there can’t be unbounded growth unless cells can be added when there is just one cell in the neighborhood. But in analogy to what happens in 2D, things get more complicated when we allow “corner adjacency” and have a 26-cell neighborhood.

\n

If cells can be added whenever there’s at least one adjacent cell, the results are similar to the 6-neighbor case, except that now there can be “corner-adjacent outgrowths”

\n
\n
\n

\n

and the whole structure is “still airier”:

\n
\n
\n

\n

Little qualitatively changes for a rule like 26:{2} where growth can occur only with exactly 2 neighbors (here starting with a 3D dimer):


But the general question of when there is growth, and when not, is quite complicated and subtle. In particular, even with a specific rule, there are often some initial conditions that can lead to unbounded growth, and others that cannot.


Sometimes there is growth for a while, but then it stops. For example, with the rule 26:{9}, one possible path of evolution from a 3×3×3 block is:


The full multiway graph in this case terminates, confirming that no unbounded growth is ever possible:


With other initial conditions, however, this rule can grow for longer (here shown every 10 steps):


And from what one can tell, all rules 26:{n} lead to unbounded growth when n is small enough, and do not when n is larger.


Polygonal Shapes


So far, we’ve been looking at “filling in cells” in grids—in 2D, 1D and 3D. But we can also look at just “placing tiles” without a grid, with each new tile attaching edge to edge to an existing tile.


For square tiles, there isn’t really a difference:


And the multiway system is just the same as for our original “grow anywhere” rule on a 2D grid:

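Concretely, if cluster states are canonicalized up to translation, the number of distinct states after successive steps is the number of fixed polyominoes of each size (1, 2, 6, 19, …). A minimal Python sketch of this multiway generation (an illustrative encoding introduced here, not the article's code):

```python
def canonical(cells):
    # translate so the minimum x and y coordinates are zero
    xs = min(x for x, y in cells)
    ys = min(y for x, y in cells)
    return frozenset((x - xs, y - ys) for x, y in cells)

def multiway_levels(steps):
    # each state is a cluster of grid cells (canonical up to translation);
    # each event adds one edge-adjacent cell, in all possible ways
    level = {canonical({(0, 0)})}
    counts = [len(level)]
    for _ in range(steps):
        nxt = set()
        for state in level:
            for (x, y) in state:
                for site in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if site not in state:
                        nxt.add(canonical(state | {site}))
        level = nxt
        counts.append(len(level))
    return counts
```

The successive level sizes 1, 2, 6, 19 are exactly the counts of fixed polyominoes with 1 to 4 cells.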

Now here’s what happens for triangular tiles:


The multiway graph now generates all polyiamonds (triangular polyforms):


And since equilateral triangles can tessellate in a regular lattice, we can think of this—like the square case—as “filling in cells in a lattice” rather than just “placing tiles”. Here are some larger examples of random clusters in this case:


Essentially the same happens with regular hexagons:


The multiway graph generates all polyhexes:


Here are some examples of larger clusters—showing somewhat more “tendrils” than the triangular case:


And in an “effectively lattice” case like this we could also go on and impose constraints on neighborhood configurations, much as we did in earlier sections above.


But what happens if we consider shapes that do not tessellate the plane—like regular pentagons? We can still “sequentially place tiles” with the constraint that any new tile can’t overlap an existing one. And with this rule we get for example:


Here are some “randomly grown” larger clusters—showing all sorts of irregularly shaped interstices inside:


(And, yes, generating such pictures correctly is far from trivial. In the “effectively lattice” case, coincidences between polygons are fairly easy to determine exactly. But in something like the pentagon case, doing so requires solving equations in a high-degree algebraic number field.)


The multiway graph, however, does not show any immediately obvious differences from the ones for “effectively lattice” cases:


It makes it slightly easier to see what’s going on if we riffle the results on the last step we show:


The branchial graphs in this case have the form:


Here’s a larger cluster formed from pentagons:


And remember that the way this is built is to add one pentagon at each step, testing every “exposed edge” and seeing in which cases a pentagon will “fit”. As in all our other examples, there is no preference given to “external” versus “internal” edges.


Note that whereas “effectively lattice” clusters always eventually fill in all their holes, this isn’t true for something like the pentagon case. And in this case it appears that in the limit, about 28% of the overall area is taken up by holes. And, by the way, there’s a definite “zoo” of possible holes (at least small ones), here plotted with their (logarithmic) probabilities:


So what happens with other regular polygons? Here’s an example with octagons (and in this case the limiting total area taken up by holes is about 35%):


And, by the way, here’s the “zoo of holes” in this case:


With pentagons, it’s pretty clear that difficult-to-resolve geometrical situations will arise. And one might have thought that octagons would avoid these. But there are still plenty of strange “mismatches” like


that aren’t easy to characterize or analyze. By the way, one should note that any time a “closed hole” is formed, the vectors corresponding to the edges that form its boundary must sum to zero—in effect defining an equation.


When the number of sides in the regular polygon gets large, our clusters will approximate circle packings. Here’s an example with 12-gons:


But of course because we’re insisting on adding one polygon at a time, the resulting structure is much “airier” than a true circle packing—of the kind that would be obtained (at least in 2D) by “pushing on the edges” of the cluster.


Polyomino Tilings


In the previous section we considered “sequential tilings” constructed from regular polygons. But the methods we used are quite general, and can be applied to sequential tilings formed from any shape—or shapes (or, at least, any shapes for which “attachment edges” can be identified).


As a first example, consider a domino or dimer shape—which we assume can be oriented both vertically and horizontally:


Here’s a somewhat larger cluster formed from dimers:

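Random dimer aggregation of this kind is easy to sketch in Python (an illustrative encoding introduced here, not the article's code): each new domino occupies two empty grid cells, one of which must share an edge with the existing cluster.

```python
import random

DIRS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def grow_dimers(steps, seed=0):
    rng = random.Random(seed)
    occupied = {(0, 0), (1, 0)}                # start from one horizontal dimer
    for _ in range(steps):
        candidates = []
        for (x, y) in occupied:
            for (dx, dy) in DIRS:
                c1 = (x + dx, y + dy)          # empty cell touching the cluster
                if c1 in occupied:
                    continue
                for (ex, ey) in DIRS:
                    c2 = (c1[0] + ex, c1[1] + ey)   # the dimer's second cell
                    if c2 not in occupied:
                        candidates.append((c1, c2))
        c1, c2 = rng.choice(candidates)
        occupied |= {c1, c2}
    return occupied

# every step adds exactly two cells, so a 100-step cluster has 202 cells
```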

Here’s the canonicalized multiway graph in this case:


And here are the branchial graphs:


So what about other polyomino shapes? What happens when we try to sequentially tile with these—effectively making “polypolyominoes”?


Here’s an example based on an L-shaped polyomino:


Here’s a larger cluster


and here’s the canonicalized multiway graph after just 1 step


and after 2 steps:


The only other 3-cell polyomino is the straight tromino:


(For dimers, the limiting fraction of area covered by holes seems to be about 17%, while for the L-shaped and straight trominoes, it’s about 27%.)


Going to 4 cells, there are 5 possible polyominoes—and here are samples of random clusters that can be built with them (note that in the last case shown, we require only that “subcells” of the 2×2 polyomino must align):

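The count of 5 here is the number of “free” tetrominoes, i.e. 4-cell polyominoes distinct up to translation, rotation and reflection. A small Python enumeration sketch (introduced here for illustration):

```python
def normalize(cells):
    # translate so the minimum coordinates are at the origin
    xs = min(x for x, y in cells)
    ys = min(y for x, y in cells)
    return frozenset((x - xs, y - ys) for x, y in cells)

def canonical(cells):
    # canonical form under the 8 symmetries of the square lattice
    variants = []
    for _ in range(4):
        cells = frozenset((-y, x) for x, y in cells)   # rotate 90 degrees
        variants.append(normalize(cells))
        variants.append(normalize(frozenset((x, -y) for x, y in cells)))  # reflect
    return min(variants, key=sorted)

def free_polyominoes(n):
    # grow all n-cell shapes by adding one edge-adjacent cell at a time
    shapes = {canonical(frozenset({(0, 0)}))}
    for _ in range(n - 1):
        nxt = set()
        for s in shapes:
            for (x, y) in s:
                for c in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if c not in s:
                        nxt.add(canonical(s | {c}))
        shapes = nxt
    return len(shapes)
```

The counts come out as 1, 1, 2, 5, 12 for sizes 1 through 5, matching the 5 tetrominoes used here.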

The corresponding multiway graphs are:


Continuing for more steps in a few cases:


Some polyominoes are “more awkward” to fit together than others—so these typically give clusters of “lower density”:


So far, we’ve always considered adding new polyominoes so that they “attach” on any “exposed edge”. And the result is that we can often get long “tendrils” in our clusters of polyominoes. But an alternative strategy is to try to add polyominoes as “compactly” as possible, in effect by adding successive “rings” of polyominoes (with “older” rings here colored bluer):


In general there are many ways to add these rings, and eventually one will often get stuck, unable to add polyominoes without leaving holes—as indicated by the red annotation here:


Of course, that doesn’t mean that if one was prepared to “backtrack and try again”, one couldn’t find a way to extend the cluster without leaving holes. And indeed for the polyomino we’re looking at here it’s perfectly possible to end up with “perfect tilings” in which no holes are left:


In general, we could consider all sorts of different strategies for growing clusters by adding polyominoes “in parallel”—just like in our discussion of causal graphs above. And if we add polyominoes “a ring at a time” we’re effectively making a particular choice of foliation—in which the successive “ring states” turn out to be directly analogous to what we call “generational states” in our Physics Project.


If we allow holes (and don’t impose other constraints), then it’s inevitable that—just with ordinary, sequential aggregation—we can grow an unboundedly large cluster of polyominoes of any shape, just by always attaching one edge of each new polyomino to an “exposed” edge of the existing cluster. But if we don’t allow holes, it’s a different story—and we’re talking about a traditional tiling problem, where there are ultimately cases where tiling is impossible, and only limited-size clusters can be generated.


As it happens, all polyominoes with 6 or fewer cells do allow infinite tilings. But with 7 cells the following do not:


It’s perfectly possible to grow random clusters with these polyominoes—but they tend not to be at all compact, and to have lots of holes and tendrils:


So what happens if we try to grow clusters in rings? Here are all the possible ways to “surround” the first of these polyominoes with a “single ring”:


And it turns out that in every single case there are edges (indicated here in red) where the cluster can’t be extended—thereby demonstrating that no infinite tiling is possible with this particular polyomino.


By the way, much like we saw with constrained growth on a grid, it’s possible to have “tiling regions” that can extend only a certain limited distance, then always get stuck.


It’s worth mentioning that we’ve considered here the case of single polyominoes. It’s also possible to consider being able to add a whole set of possible polyominoes—“Tetris style”.


Nonperiodic Tilings


We’ve looked at polyominoes—and shapes like pentagons—that don’t tile the plane. But what about shapes that can tile the plane, but only nonperiodically? As an example, let’s consider Penrose tiles. The basic shapes of these tiles are


though there are additional matching conditions (implicitly indicated by the arrows on each tile), which can be enforced either by putting notches in the tiles or by decorating the tiles:


Starting with these individual tiles, we can build up a multiway system by attaching tiles wherever the matching rules are satisfied (note that all edges of both tiles are the same length):


So how can we tell that these tiles can form a nonperiodic tiling? One approach is to generate a multiway system in which at successive steps we surround clusters with rings in all possible ways:


Continuing for another step we get:


Notice that here some of the branches have died out. But which branches will continue forever, and thus lead to an infinite tiling? To answer this we have to do a bit of analysis.


The first step is to see what possible “rings” can have formed around the original tile. And we can read all of these off from the multiway graph:


But now it’s convenient to look not at possible rings around a tile, but instead at possible configurations of tiles that can surround a single vertex. There turns out to be the following limited set:


The last two of these configurations have the feature that they can’t be extended: no tile can be added on the center of their “blue sides”. But it turns out that all the other configurations can be extended—though only to make a nested tiling, not a periodic one.


And a first indication of this is that larger copies of tiles (“supertiles”) can be drawn on top of the first three configurations we just identified, in such a way that the vertices of the supertiles coincide with vertices of the original tiles:


And now we can use this to construct rules for a substitution system:


Applying this substitution system builds up a nested tiling that can be continued forever:

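One quantitative signature of such a substitution system is how the tile counts grow. Under one commonly used substitution for the Penrose rhombus tiles (assumed here, since the article's supertile rules are shown only graphically), each fat rhombus becomes 2 fat plus 1 thin, and each thin becomes 1 fat plus 1 thin; the ratio of tile counts then converges to the golden ratio:

```python
def substitute(fat, thin, steps):
    # assumed Penrose rhombus substitution, counting tiles only:
    #   fat  -> 2 fat + 1 thin
    #   thin -> 1 fat + 1 thin
    for _ in range(steps):
        fat, thin = 2 * fat + thin, fat + thin
    return fat, thin
```

A limiting tile ratio converging to an irrational number is consistent with (though not by itself a proof of) the tiling being forced to be nonperiodic.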

But is such a nested tiling the only one that is possible with our original tiles? We can prove that it is by showing that every tile in every possible configuration occurs within a supertile. We can pull out possible configurations from the multiway system—and then in each case it turns out that we can indeed find a supertile in which the original tile occurs:


And what this all means is that the only infinite paths that can occur in the multiway system are ones that correspond to nested tilings; all other paths must eventually die out.


The Penrose tiling involves two distinct tiles. But in 2022 it was discovered that—if one’s allowed to flip the tile over—just a single (“hat”) tile is sufficient to force a nonperiodic tiling:


The full multiway graph obtained from this tile (and its flip-over) is complicated, but many paths in it lead (at least eventually) to “dead ends” which cannot be further extended. Thus, for example, the following configurations—which appear early in the multiway graph—all have the property that they can’t occur in an infinite tiling:


In the first case here, we can successively add a few rings of tiles:


But after 7 rings, there is a “contradiction” on the boundary, and no further growth is possible (as indicated by the red annotations):


Having eliminated cases that always lead to “dead ends”, we get a simplified multiway graph that effectively includes all joins between hat tiles that can ultimately lead to surviving configurations:


Once again we can define a supertile transformation


where the region outlined in red can potentially overlap another supertile. Now we can construct a multiway graph for the supertile (in its “bitten out” and full variant)


and can see that there is a (one-to-one) map between the multiway graph for the original tiles and the one for these supertiles:


And now from this we can tell that there can be arbitrarily large nested tilings using the hat tile:


Personal Notes


Tucked away on page 979 of my 2002 book A New Kind of Science is a note (written in 1995) on “Generalized aggregation models”:


And in many ways the current piece is a three-decade-later followup to that note—using a new approach based on multiway systems.


In A New Kind of Science I did discuss multiway systems (both abstractly, and in connection with fundamental physics). But what I said about aggregation was mostly in a section called “The Phenomenon of Continuity” which discussed how randomness could on a large scale lead to apparent continuity. That section began by talking about things like random walks, but went on to discuss the same minimal (“Eden model”) example of “random aggregation” that I give here. And then, in an attempt to “spruce up” my discussion of aggregation, I started looking at “aggregation with constraints”. In the main text of the book I gave just two examples:


But then for the footnote I studied a wider range of constraints (enumerating them much as I had cellular automata)—and noticed the surprising phenomenon that with some constraints the aggregation process could end up getting stuck, and not being able to continue.


For years I carried around the idea of investigating that phenomenon further. And it was often on my list as a possible project for a student to explore at the Wolfram Summer School. Occasionally it was picked, and progress was made in various directions. And then a few years ago, with our Physics Project in the offing, the idea arose of investigating it using multiway systems—and there were Summer School projects that made progress on this. Meanwhile, as our Physics Project progressed, our tools for working with multiway systems greatly improved—ultimately making possible what we’ve done here.


By the way, back in the 1990s, one of the many topics I studied for A New Kind of Science was tilings. And in an effort to determine what tilings were possible, I investigated what amounts to aggregation under tiling constraints—which is in fact even a generalization of what I consider here:


Thanks


First and foremost, I’d like to thank Brad Klee for extensive help with this piece, as well as Nik Murzin for additional help. (Thanks also to Catherine Wolfram, Christopher Wolfram and Ed Pegg for specific pointers.) I’d like to thank various Wolfram Summer School students (and their mentors) who’ve worked on aggregation systems and their multiway interpretation in recent years: Kabir Khanna 2019 (mentors: Christopher Wolfram & Jonathan Gorard), Lina M. Ruiz 2021 (mentors: Jesse Galef & Xerxes Arsiwalla), Pietro Pepe 2023 (mentor: Bob Nachbar). (Also related are the Summer School projects on tilings by Bowen Ping 2023 and Johannes Martin 2023.)


See Also


Games and Puzzles as Multicomputational Systems


The Physicalization of Metamathematics and Its Implications for the Foundations of Mathematics


Multicomputation with Numbers: The Case of Simple Multiway Systems


Multicomputation: A Fourth Paradigm for Theoretical Science


Multiway Turing Machines


Combinators: A Centennial View—Updating Schemes and Multiway Systems


The Updating Process for String Substitution Systems

Expression Evaluation and Fundamental Physics


An Unexpected Correspondence


Enter any expression and it’ll get evaluated:


And internally—say in the Wolfram Language—what’s going on is that the expression is progressively being transformed using all available rules until no more rules apply. Here the process can be represented like this:


We can think of the yellow boxes in this picture as corresponding to “evaluation events” that transform one “state of the expression” (represented by a blue box) to another, eventually reaching the “fixed point” 12.

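As a toy model of this reduction process, one can encode expressions as nested ('+', left, right) Python tuples and repeatedly apply the first available “evaluation event” (a sketch of one depth-first order, introduced here for illustration; it is not the actual Wolfram Language evaluator):

```python
def step(expr):
    # reduce the first (depth-first, left-to-right) addition whose arguments
    # are both already numbers; return (new_expression, event_occurred)
    if isinstance(expr, int):
        return expr, False
    op, a, b = expr
    a2, changed = step(a)
    if changed:
        return (op, a2, b), True
    b2, changed = step(b)
    if changed:
        return (op, a, b2), True
    if isinstance(a, int) and isinstance(b, int):
        return a + b, True
    return expr, False

def evaluate(expr):
    # apply evaluation events one at a time until a fixed point is reached
    states = [expr]
    while True:
        expr, changed = step(expr)
        if not changed:
            return states
        states.append(expr)
```

For (1 + (2 + 2)) + (3 + 4) this produces four evaluation events, ending at the fixed point 12.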

And so far this may all seem very simple. But actually there are many surprisingly complicated and deep issues and questions. For example, to what extent can the evaluation events be applied in different orders, or in parallel? Does one always get the same answer? What about non-terminating sequences of events? And so on.


I was first exposed to such issues more than 40 years ago—when I was working on the design of the evaluator for the SMP system that was the forerunner of Mathematica and the Wolfram Language. And back then I came up with pragmatic, practical solutions—many of which we still use today. But I was never satisfied with the whole conceptual framework. And I always thought that there should be a much more principled way to think about such things—that would likely lead to all sorts of important generalizations and optimizations.


Well, more than 40 years later I think we can finally now see how to do this. And it’s all based on ideas from our Physics Project—and on a fundamental correspondence between what’s happening at the lowest level in all physical processes and in expression evaluation. Our Physics Project implies that ultimately the universe evolves through a series of discrete events that transform the underlying structure of the universe (say, represented as a hypergraph)—just like evaluation events transform the underlying structure of an expression.


And given this correspondence, we can start applying ideas from physics—like ones about spacetime and quantum mechanics—to questions of expression evaluation. Some of what this will lead us to is deeply abstract. But some of it has immediate practical implications, notably for parallel, distributed, nondeterministic and quantum-style computing. And from seeing how things play out in the rather accessible and concrete area of expression evaluation, we’ll be able to develop more intuition about fundamental physics and about other areas (like metamathematics) where the ideas of our Physics Project can be applied.


Causal Graphs and Spacetime


The standard evaluator in the Wolfram Language applies evaluation events to an expression in a particular order. But typically multiple orders are possible; for the example above, there are three:


So what determines what orders are possible? There is ultimately just one constraint: the causal dependencies that exist between events. The key point is that a given event cannot happen unless all the inputs to it are available, i.e. have already been computed. So in the example here, the evaluation event cannot occur unless the one has already occurred. And we can summarize this by “drawing a causal edge” from the event to the one. Putting together all these “causal relations”, we can make a causal graph, which in the example here has the simple form (where we include a special “Big Bang” initial event to create the original expression that we’re evaluating):


What we see from this causal graph is that the events on the left must all follow each other, while the event on the right can happen “independently”. And this is where we can start making an analogy with physics. Imagine our events are laid out in spacetime. The events on the left are “timelike separated” from each other, because they are constrained to follow one after another, and so must in effect “happen at different times”. But what about the event on the right? We can think of this as being “spacelike separated” from the others, and happening at a “different place in space” asynchronously from the others.

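These causal relations are easy to encode and check explicitly (a sketch; the event labels a, b, c, d are introduced here, standing for the reductions 2+2, 1+4, 3+4 and 5+7 respectively):

```python
from itertools import permutations

# Evaluation events for (1 + (2 + 2)) + (3 + 4) and their causal dependencies:
#   a: 2+2 -> 4     b: 1+4 -> 5  (needs a)
#   c: 3+4 -> 7     d: 5+7 -> 12 (needs b and c)
deps = {"a": set(), "b": {"a"}, "c": set(), "d": {"b", "c"}}

def orderings(deps):
    # every total ordering of events consistent with the causal graph
    result = []
    for perm in permutations(sorted(deps)):
        seen, ok = set(), True
        for e in perm:
            if not deps[e] <= seen:
                ok = False
                break
            seen.add(e)
        if ok:
            result.append(perm)
    return result

def causally_precedes(deps, x, y):
    # is there a path of causal edges from event x to event y?
    frontier = {y}
    while frontier:
        if x in frontier:
            return True
        frontier = set().union(*(deps[e] for e in frontier))
    return False
```

Events a and c are spacelike separated, since neither causally precedes the other; that independence is exactly what makes three total orderings possible.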

As a quintessential example of a timelike chain of events, consider making the definition


and then generating the causal graph for the events associated with evaluating f[f[f[1]]] (i.e. Nest[f, 1, 3]):


A straightforward way to get spacelike events is just to “build in space” by giving an expression like f[1] + f[1] + f[1] that has parts that can effectively be thought of as being explicitly “laid out in different places”, like the cells in a cellular automaton:


But one of the major lessons of our Physics Project is that it’s possible for space to “emerge dynamically” from the evolution of a system (in that case, by successive rewriting of hypergraphs). And it turns out very much the same kind of thing can happen in expression evaluation, notably with recursively defined functions.


As a simple example, consider the standard definition of Fibonacci numbers:

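One can get a feel for the shape of these causal graphs by instrumenting the recursion directly (a sketch assuming the usual definition f[n] = f[n-1] + f[n-2] with base cases f[1] = f[2] = 1):

```python
def trace(n, events=None):
    # record one "evaluation event" per application of the recursive rule
    # f[n] = f[n-1] + f[n-2], with f[1] = f[2] = 1 (assumed base cases)
    if events is None:
        events = []
    events.append(n)
    if n > 2:
        trace(n - 1, events)  # these two subevaluations are causally
        trace(n - 2, events)  # independent: spacelike separated
    return events
```

Without memoization the number of events for f[n] is 2*fib(n) - 1, and the two branches at each level could run in parallel, which is the practical content of their spacelike separation.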

With this definition, the causal graph for the evaluation of f[3] is then:


For f[5], dropping the “context” of each event, and showing only what changed, the graph is


while for f[8] the structure of the graph is:


So what is the significance of there being spacelike-separated parts in this graph? At a practical level, a consequence is that those parts correspond to subevaluations that can be done independently, for example in parallel. All the events (or subevaluations) in any timelike chain must be done in sequence. But spacelike-separated events (or subevaluations) don’t immediately have a particular relative order. The whole graph can be thought of as defining a partial ordering for all events—with the events forming a partially ordered set (poset). Our “timelike chains” then correspond to what are usually called chains in the poset. The antichains of the poset represent possible collections of events that can occur “simultaneously”.


And now there’s a deep analogy to physics. Because just like in the standard relativistic approach to spacetime, we can define a sequence of “spacelike surfaces” (or hypersurfaces in 3 + 1-dimensional spacetime) that correspond to possible successive “simultaneity surfaces” where events can consistently be done simultaneously. Put another way, any “foliation” of the causal graph defines a sequence of “time steps” in which particular collections of events occur—as in for example:

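One natural foliation assigns every event to the earliest slice its causal dependencies allow. A sketch, with the events of the (1 + (2 + 2)) + (3 + 4) example encoded by hand (the labels a through d are introduced here):

```python
# Evaluation events for (1 + (2 + 2)) + (3 + 4), encoded by hand:
#   a: 2+2    b: 1+4 (needs a)    c: 3+4    d: 5+7 (needs b and c)
deps = {"a": set(), "b": {"a"}, "c": set(), "d": {"b", "c"}}

def foliate(deps):
    # put each event in the earliest "time slice" its causal dependencies
    # allow, giving one maximal foliation of the causal graph
    remaining, done, slices = dict(deps), set(), []
    while remaining:
        ready = {e for e, d in remaining.items() if d <= done}
        if not ready:
            raise ValueError("causal graph contains a cycle")
        slices.append(ready)
        done |= ready
        for e in ready:
            del remaining[e]
    return slices
```

Here the slices come out as {a, c}, then {b}, then {d}: the two independent additions happen “simultaneously” in the first time step.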

And just like in relativity theory, different foliations correspond to different choices of reference frames, or what amount to different choices of “space and time coordinates”. But at least in the examples we’ve seen so far, the “final result” from the evaluation is always the same, regardless of the foliation (or reference frame) we use—just as we expect when there is relativistic invariance.


As a slightly more complex—but ultimately very similar—example, consider the nestedly recursive function:


Now the causal graph for f[12] has the form


which again has both spacelike and timelike structure.


Foliations and the Definition of Time


Let’s go back to our first example above—the evaluation of (1 + (2 + 2)) + (3 + 4). As we saw above, the causal graph in this case is:


The standard Wolfram Language evaluator makes these events occur in the following order:


And by applying events in this order starting with the initial state, we can reconstruct the sequence of states that will be reached at each step by this particular evaluation process (where now we’ve highlighted in each state the part that’s going to be transformed at each step):


Here’s the standard evaluation order for the Fibonacci number f[3]:


And here’s the sequence of states generated from this sequence of events:


Any valid evaluation order has to eventually visit (i.e. apply) all the events in the causal graph. Here’s the path that’s traced out by the standard evaluation order on the causal graph for f[8]. As we’ll discuss later, this corresponds to a depth-first scan of the (directed) graph:


But let’s return now to our first example. We’ve seen the order of events used in the standard Wolfram Language evaluation process. But there are actually three different orders that are consistent with the causal relations defined by the causal graph (in the language of posets, each of these is a “total ordering”):


And for each of these orders we can reconstruct the sequence of states that would be generated:


Up to this point we’ve always assumed that we’re just applying one event at a time. But whenever we have spacelike-separated events, we can treat such events as “simultaneous”—and applied at the same point. And—just like in relativity theory—there are typically multiple possible choices of “simultaneity surfaces”. Each one corresponds to a certain foliation of our causal graph. And in the simple case we’re looking at here, there are only two possible (maximal) foliations:


From such foliations we can reconstruct possible total orderings of individual events just by enumerating possible permutations of events within each slice of the foliation (i.e. within each simultaneity surface). But we only really need a total ordering of events if we’re going to apply one event at a time. Yet the whole point is that we can view spacelike-separated events as being “simultaneous”. Or, in other words, we can view our system as “evolving in time”, with each “time step” corresponding to a successive slice in the foliation.


And with this setup, we can reconstruct states that exist at each time step—interspersed by updates that may involve several “simultaneous” (spacelike-separated) events. In the case of the two foliations above, the resulting sequences of (“reconstructed”) states and updates are respectively:


As a more complicated example, consider recursively evaluating the Fibonacci number f[3] as above. Now the possible (maximal) foliations are:


For each of these foliations we can then reconstruct an explicit “time series” of states, interspersed by “updates” involving varying numbers of events:


So where in all these is the standard evaluation order? Well, it’s not explicitly here—because it involves doing a single event at a time, while all the foliations here are “maximal” in the sense that they aggregate as many events as they can into each spacelike slice. But if we don’t impose this maximality constraint, are there foliations that in a sense “cover” the standard evaluation order? Without the maximality constraint, there turn out to be not 10 but 1249 possible foliations in the example we’re using. And there are 4 that “cover” the standard (“depth-first”) evaluation order (indicated by a dashed red line):


(Only the last foliation here, in which every “slice” is just a single event, can strictly reproduce the standard evaluation order, but the others are all still “consistent with it”.)


In the standard evaluation process, only a single event is ever done at a time. But what if instead one tries to do as many events as possible at a time? Well, that’s what our “maximal foliations” above are about. But one particularly notable case is what corresponds to a breadth-first scan of the causal graph. And this turns out to be covered by the very last maximal foliation we showed above.


How this works may not be immediately obvious from the picture. With our standard layout for the causal graph, the path corresponding to the breadth-first scan is:


But if we lay out the causal graph differently, the path takes on the much-more-obviously-breadth-first form:


And now using this layout for the various configurations of foliations above we get:


We can think of different layouts for the causal graph as defining different “coordinatizations of spacetime”. If the vertical direction is taken to be time, and the horizontal direction space, then different layouts in effect place events at different positions in time and space. And with the layout here, the last foliation above is “flat”, in the sense that successive slices of the foliation can be thought of as directly corresponding to successive “steps in time”.


In physics terms, different foliations correspond to different “reference frames”. And the “flat” foliation can be thought of as being like the cosmological rest frame, in which the observer is “at rest with respect to the universe”. In terms of states and events, we can also interpret this another way: we can say it’s the foliation in which in some sense the “largest possible number of events are being packed in at each step”. Or, more precisely, if at each step we scan from left to right, we’re doing every successive event that doesn’t overlap with events we’ve already done at this step:


And actually this also corresponds to what happens if, instead of using the built-in standard evaluator, we explicitly tell the Wolfram Language to repeatedly do replacements in expressions. To compare with what we’ve done above, we have to be a little careful in our definitions, using ⊕ and ⊖ as versions of + and – that have to get explicitly evaluated by other rules. But having done this, we get exactly the same sequence of “intermediate expressions” as in the flat (i.e. “breadth-first”) foliation above:

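The effect of doing every available reduction at once can be mimicked by encoding expressions as nested ('+', left, right) Python tuples (an illustrative encoding introduced here, not the Wolfram Language mechanism itself):

```python
def parallel_step(expr):
    # simultaneously reduce every addition whose arguments are both numbers
    if isinstance(expr, int):
        return expr
    op, a, b = expr
    if isinstance(a, int) and isinstance(b, int):
        return a + b
    return (op, parallel_step(a), parallel_step(b))

def parallel_evaluate(expr):
    # each step applies a maximal set of spacelike-separated events at once
    states = [expr]
    while not isinstance(expr, int):
        expr = parallel_step(expr)
        states.append(expr)
    return states
```

For (1 + (2 + 2)) + (3 + 4) this takes three “simultaneous” update steps to reach 12, matching the flat, breadth-first foliation rather than the four single-event steps of depth-first evaluation.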

In general, different foliations can be thought of as specifying different “event-selection functions” to be applied to determine what events should occur at the next steps from any given state. At one extreme we can pick single-event-at-a-time event selection functions—and at the other extreme we can pick maximum-events-at-a-time event selection functions. In our Physics Project we have called the states obtained by applying maximal collections of events at a time “generational states”. And in effect these states represent the typical way we parse physical “spacetime”—in which we take in “all of space” at every successive moment of time. At a practical level the reason we do this is that the speed of light is somehow fast compared to the operation of our brains: if we look at our local surroundings (say the few hundred meters around us), light from these will reach us in a microsecond, while it takes our brains milliseconds to register what we’re seeing. And this makes it reasonable for us to think of there being an “instantaneous state of space” that we can perceive “all at once” at each particular “moment in time”.


But what’s the analog of this when it comes to expression evaluation? We’ll discuss this a little more later. But suffice it to say here that it depends on who or what the “observer” of the process of evaluation is supposed to be. If we’ve got different elements of our states laid out explicitly in arrays, say in a GPU, then we might again “perceive all of space at once”. But if, for example, the data associated with states is connected through chains of pointers in memory or the like, and we “observe” this data only when we explicitly follow these pointers, then our perception won’t as obviously involve something we can think of as “bulk space”. But by thinking in terms of foliations (or reference frames) as we have here, we can potentially fit what’s going on into something like space, that seems familiar to us. Or, put another way, we can imagine in effect “programming in a certain reference frame” in which we can aggregate multiple elements of what’s going on into something we can consider as an analog of space—thereby making it familiar enough for us to understand and reason about.


Multiway Evaluation and Multiway Graphs


We can view everything we’ve done so far as dissecting and reorganizing the standard evaluation process. But let’s say we’re just given certain underlying rules for transforming expressions—and then we apply them in all possible ways. It’ll give us a “multiway” generalization of evaluation—in which instead of there being just one path of history, there are many. And in our Physics Project, this is exactly how the transition from classical to quantum physics works. And as we proceed here, we’ll see a close correspondence between multiway evaluation and quantum processes.


But let’s start again with our expression (1 + (2 + 2)) + (3 + 4), and consider all possible ways that individual integer addition “events” can be applied to evaluate this expression. In this particular case, the result is pretty simple, and can be represented by a tree that branches in just two places:
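This "apply events in all possible ways" construction can be made concrete with a short Python sketch (a toy illustration, using the same nested-tuple representation as before). It finds every position where an integer addition can occur, applies each one separately, and collects all states and transitions:

```python
def redexes(expr, path=()):
    # Yield the positions at which an integer-addition event can occur.
    if isinstance(expr, int):
        return
    _, a, b = expr
    if isinstance(a, int) and isinstance(b, int):
        yield path
    yield from redexes(a, path + (1,))
    yield from redexes(b, path + (2,))

def apply_at(expr, path):
    # Perform the single addition event at the given position.
    if not path:
        _, a, b = expr
        return a + b
    op, a, b = expr
    if path[0] == 1:
        return (op, apply_at(a, path[1:]), b)
    return (op, a, apply_at(b, path[1:]))

def multiway_graph(expr):
    # Breadth-first exploration, merging structurally equal states.
    states, edges, frontier = {expr}, set(), [expr]
    while frontier:
        nxt = []
        for s in frontier:
            for p in redexes(s):
                t = apply_at(s, p)
                edges.add((s, t))
                if t not in states:
                    states.add(t)
                    nxt.append(t)
        frontier = nxt
    return states, edges

states, edges = multiway_graph(('+', ('+', 1, ('+', 2, 2)), ('+', 3, 4)))
```

For this expression the exploration yields 7 distinct states, with the single terminal state 12—consistent with a tree that branches in just two places.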


But one thing to notice here is that even at the first step there’s an event that we’ve never seen before. It’s something that’s possible if we apply integer addition in all possible places. But when we start from the standard evaluation process, the basic event just never appears with the “expression context” we’re seeing it in here.


Each branch in the tree above in some sense represents a different “path of history”. But there’s a certain redundancy in having all these separate paths, because there are multiple instances of the same expression that appear in different places. And if we treat these as equivalent and merge them, we now get:


(The question of “state equivalence” is a subtle one that ultimately depends on the operation of the observer, and how the observer constructs their perception of what’s going on. But for our purposes here, we’ll treat expressions as equivalent if they are structurally the same, i.e. every instance of or of 5 is “the same” or 5.)


If we now look only at states (i.e. expressions) we’ll get a multiway graph, of the kind that’s appeared in our Physics Project and in many applications of concepts from it:


This graph in a sense gives a succinct summary of possible paths of history, which here correspond to possible evaluation paths. The standard evaluation process corresponds to a particular path in this multiway graph:


What about a more complicated case? For example, what is the multiway graph for our recursive computation of Fibonacci numbers? As we’ll discuss at more length below, in order to make sure every branch of our recursive evaluation terminates, we have to give a slightly more careful definition of our function f:


But now here’s the multiway tree for the evaluation of f[2]:


And here’s the corresponding multiway graph:


The leftmost branch in the multiway tree corresponds to the standard evaluation process; here’s the corresponding path in the multiway graph:


Here’s the structure of the multiway graph for the evaluation of f[3]:


Note that (as we’ll discuss more later) all the possible evaluation paths in this case lead to the same final expression, and in fact in this particular example all the paths are of the same length (12 steps, i.e. 12 evaluation events).
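Both properties—a unique final expression, and all paths having equal length—can be verified by brute force. The article's exact (more careful) definition of f is given above; here is a Python sketch using a plausible simplified rule set, f[n] → f[n−1] + f[n−2] for n ≥ 2 with f[0] = f[1] = 1, that enumerates every possible evaluation order (with this rule set each path for f[3] happens to take 7 events, not 12):

```python
def redexes(e, path=()):
    # Positions where a rewrite can occur: any f[...] call,
    # or an addition whose arguments are both integers.
    if isinstance(e, int):
        return
    if e[0] == 'f':
        yield path
        return
    _, a, b = e
    if isinstance(a, int) and isinstance(b, int):
        yield path
    yield from redexes(a, path + (1,))
    yield from redexes(b, path + (2,))

def rewrite(e):
    if e[0] == 'f':
        n = e[1]
        return 1 if n <= 1 else ('+', ('f', n - 1), ('f', n - 2))
    return e[1] + e[2]

def apply_at(e, path):
    if not path:
        return rewrite(e)
    op, a, b = e
    return (op, apply_at(a, path[1:]), b) if path[0] == 1 else (op, a, apply_at(b, path[1:]))

def all_paths(e, depth=0):
    # Enumerate every evaluation order, yielding (final state, path length).
    rs = list(redexes(e))
    if not rs:
        yield (e, depth)
        return
    for p in rs:
        yield from all_paths(apply_at(e, p), depth + 1)

results = set(all_paths(('f', 3)))
```

Every path terminates at the same value with the same event count, so `results` collapses to a single (state, length) pair.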


In the multiway graphs we’re drawing here, every edge in effect corresponds to an evaluation event. And we can imagine setting up foliations in the multiway graph that divide these events into slices. But what is the significance of these slices? When we did the same kind of thing above for causal graphs, we could interpret the slices as representing “instantaneous states laid out in space”. And by analogy we can interpret a slice in the multiway graph as representing “instantaneous states laid out across branches of history”. In the context of our Physics Project, we can then think of these slices as being like superpositions in quantum mechanics, or states “laid out in branchial space”. And, as we’ll discuss later, just as we can think of elements laid out in “space” as corresponding in the Wolfram Language to parts in a symbolic expression (like a list, a sum, etc.), so now we’re dealing with a new kind of way of aggregating states across branchial space, that has to be represented with new language constructs.


But let’s return to the very simple case of (1 + (2 + 2)) + (3 + 4). Here’s a more complete representation of the multiway evaluation process in this case, including both all the events involved, and the causal relations between them:


The “single-way” evaluation process we discussed above uses only part of this:


And from this part we can pull out the causal relations between events to reproduce the (“single-way”) causal graph we had before. But what if we pull out all the causal relations in our full graph?


What we then have is the multiway causal graph. And from foliations of this, we can construct possible histories—though now they’re multiway histories, with the states at particular time steps now being what amount to superposition states.


In the particular case we’re showing here, the multiway causal graph has a very simple structure, consisting essentially just of a bunch of isomorphic pieces. And as we’ll see later, this is an inevitable consequence of the nature of the evaluation we’re doing here, and its property of causal invariance (and in this case, confluence).


Branchlike Separation


Although what we’ve discussed has already been somewhat complicated, there’s actually been a crucial simplifying assumption in everything we’ve done. We’ve assumed that different transformations on a given expression can never apply to the same part of the expression. Different transformations can apply to different parts of the same expression (corresponding to spacelike-separated evaluation events). But there’s never been a “conflict” between transformations, where multiple transformations can apply to the same part of the same expression.


So what happens if we relax this assumption? In effect it means that we can generate different “incompatible” branches of history—and we can characterize the events that produce this as “branchlike separated”. And when such branchlike-separated events are applied to a given state, they’ll produce multiple states which we can characterize as “separated in branchial space”, but nevertheless correlated as a result of their “common ancestry”—or, in quantum mechanics terms, “entangled”.


As a very simple first example, consider the rather trivial function f defined by


If we evaluate f[f[0]] (for any f) there are immediately two “conflicting” branches: one associated with evaluation of the “outer f”, and one with evaluation of the “inner f”:


We can indicate branchlike-separated pairs of events by a dashed line:


Adding in causal edges, and merging equivalent states, we get:


We see that some events are causally related. The first two events are not—but given that they involve overlapping transformations they are “branchially related” (or, in effect, entangled).


Evaluating the expression f[f[0]+1] gives a more complicated graph, with two different instances of branchlike-separated events:


Extracting the multiway states graph we get


where now we have indicated “branchially connected” states by pink “branchial edges”. Pulling out only these branchial edges then gives the (rather trivial) branchial graph for this evaluation process:


There are many subtle things going on here, particularly related to the treelike structure of expressions. We’ve talked about separations between events: timelike, spacelike and branchlike. But what about separations between elements of an expression? In something like {f[0], f[0], f[0]} it’s reasonable to extend our characterization of separations between events, and say that the f[0]’s in the expression can themselves be considered spacelike separated. But what about in something like f[f[0]]? We can say that the f[_]’s here “overlap”—and “conflict” when they are transformed—making them branchlike separated. But the structure of the expression also inevitably makes them “treelike separated”. We’ll see later how to think about the relation between treelike-separated elements in more fundamental terms, ultimately using hypergraphs. But for now an obvious question is what in general the relation between branchlike-separated elements can be.


And essentially the answer is that branchlike separation has to “come with” some other form of separation: spacelike, treelike, rulelike, etc. Rulelike separation involves having multiple rules for the same object (e.g. a rule as well as )—and we’ll talk about this later. With spacelike separation, we basically get branchlike separation when subexpressions “overlap”. This is fairly subtle for tree-structured expressions, but is much more straightforward for strings, and indeed we have discussed this case extensively in connection with our Physics Project.


Consider the (rather trivial) string rewriting rule:


Applying this rule to AAAAAA we get:


Some of the events here are purely spacelike separated, but whenever the characters they involve overlap, they are also branchlike separated (as indicated by the dashed pink lines). Extracting the multiway states graph we get:
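The overlap criterion is easy to state programmatically. As a Python sketch—assuming, purely for illustration, that the rule's left-hand side is a pattern like AA applied to AAAAAA (the rule itself is shown above)—one can list all match positions and classify each pair of events as branchlike (overlapping characters) or spacelike (disjoint characters):

```python
def matches(s, pat):
    # Positions where the pattern occurs (possibly overlapping).
    return [i for i in range(len(s) - len(pat) + 1) if s[i:i + len(pat)] == pat]

def classify(i, j, width):
    # Two events are branchlike separated if their matches overlap,
    # spacelike separated if they act on disjoint characters.
    return 'branchlike' if abs(i - j) < width else 'spacelike'

s, pat = 'AAAAAA', 'AA'          # hypothetical left-hand side of the rule
pos = matches(s, pat)
pairs = {(i, j): classify(i, j, len(pat)) for i in pos for j in pos if i < j}
```

With this pattern, adjacent matches (e.g. positions 0 and 1) share a character and so are branchlike separated, while matches two or more characters apart are spacelike separated.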


And now we get the following branchial graph:


So how can we see analogs in expression evaluation? It turns out that combinators provide a good example (and, yes, it’s quite remarkable that we’re using combinators here to help explain something—given that combinators almost always seem like the most obscure and difficult-to-explain things around). Define the standard S and K combinators:
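The standard rules are s[x_][y_][z_] → x[z][y[z]] and k[x_][y_] → x. As a Python sketch (representing application f[x] as the pair (f, x), and atoms as strings), one can implement leftmost-outermost reduction of S, K expressions:

```python
def reduce_once(t):
    # One leftmost-outermost S or K reduction step; None if no redex applies.
    if isinstance(t, str):
        return None
    f, x = t
    if isinstance(f, tuple) and f[0] == 'k':              # k[a][b] -> a
        return f[1]
    if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == 's':
        a, b, c = f[0][1], f[1], x                        # s[a][b][c] -> a[c][b[c]]
        return ((a, c), (b, c))
    r = reduce_once(f)
    if r is not None:
        return (r, x)
    r = reduce_once(x)
    if r is not None:
        return (f, r)
    return None

def normal_form(t, limit=100):
    # Reduce until no redex remains (with a step cap, since SK
    # reduction need not terminate in general).
    while limit:
        r = reduce_once(t)
        if r is None:
            return t
        t, limit = r, limit - 1
    return t

skka = ((('s', 'k'), 'k'), 'a')    # s[k][k][a]
```

Since S K K acts as the identity combinator, s[k][k][a] reduces to a.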


Now we have for example


where there are many spacelike-separated events, and a single pair of branchlike + treelike-separated ones. With a slightly more complicated initial expression, we get the rather messy result


now with many branchlike-separated states:


Rather than using the full standard S, K combinators, we can consider a simpler combinator definition:


Now we have for example


where the branchial graph is


and the multiway causal graph is:


The expression f[f[f][f]][f] gives a more complicated multiway graph


and branchial graph:


Interpretations, Analogies and the Concept of Multi


Before we started talking about branchlike separation, the only kinds of separation we considered were timelike and spacelike. And in this case we were able to take the causal graphs we got, and set up foliations of them where each slice could be thought of as representing a sequential step in time. In effect, what we were doing was to aggregate things so that we could talk about what happens in “all of space” at a particular time.


But when there’s branchlike separation we can no longer do this. Because now there isn’t a single, consistent “configuration of all of space” that can be thought of as evolving in a single thread through time. Rather, there are “multiple threads of history” that wind their way through the branchings (and mergings) that occur in the multiway graph. One can make foliations in the multiway graph—much like one does in the causal graph. (More strictly, one really needs to make the foliations in the multiway causal graph—but these can be “inherited” by the multiway graph.)


In physics terms, the (single-way) causal graph can be thought of as a discrete version of ordinary spacetime—with a foliation of it specifying a “reference frame” that leads to a particular identification of what one considers space, and what time. But what about the multiway causal graph? In effect, we can imagine that it defines a new, branchial “direction”, in addition to the spatial direction. Projecting in this branchial direction, we can then think of getting a kind of branchial analog of spacetime that we can call branchtime. And when we construct the multiway graph, we can basically imagine that it’s a representation of branchtime.


A particular slice of a foliation of the (single-way) causal graph can be thought of as corresponding to an “instantaneous state of (ordinary) space”. So what does a slice in a foliation of the multiway graph represent? It’s effectively a branchial or multiway combination of states—a collection of states that can somehow all exist “at the same time”. And in physics terms we can interpret it as a quantum superposition of states.


But how does all this work in the context of expressions? The parts of a single expression like a + b + c + d or {a, b, c, d} can be thought of as being spacelike separated, or in effect “laid out in space”. But what kind of a thing has parts that are “laid out in branchial space”? It’s a new kind of fundamentally multiway construct. We’re not going to explore it too much here, but in the Wolfram Language we might in future call it Multi. And just as {a, b, c, d} (or List[a, b, c, d]) can be thought of as representing a, b, c, d “laid out in space”, so now Multi[a, b, c, d] would represent a, b, c, d “laid out in branchial space”.


In ordinary evaluation, we just generate a specific sequence of individual expressions. But in multiway evaluation, we can imagine that we generate a sequence of Multi objects. In the examples we’ve seen so far, we always eventually get a Multi containing just a single expression. But we’ll soon find out that that’s not always how things work, and we can perfectly well end up with a Multi containing multiple expressions.


So what might we do with a Multi? In a typical “nondeterministic computation” we probably want to ask: “Does the Multi contain some particular expression or pattern that we’re looking for?” If we imagine that we’re doing a “probabilistic computation” we might want to ask about the frequencies of different kinds of expressions in the Multi. And if we’re doing quantum computation with the normal formalism of quantum mechanics, we might want to tag the elements of the Multi with “quantum amplitudes” (that, yes, in our model presumably have magnitudes determined by path counting in the multiway graph, and phases representing the “positions of elements in branchial space”). And in a traditional quantum measurement, the concept would typically be to determine a projection of a Multi, or in effect an inner product of Multi objects. (And, yes, if one knows only that projection, it’s not going to be enough to let one unambiguously continue the “multiway computation”; the quantum state has in effect been “collapsed”.)
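Since Multi is only a possible future construct, here is a toy Python analog (entirely hypothetical, not a real Wolfram Language API): a multiset of states supporting the two simplest queries described above—nondeterministic membership and probabilistic frequencies:

```python
from collections import Counter

class Multi:
    """Toy stand-in for a hypothetical Multi: a bag of expressions
    'laid out in branchial space'."""
    def __init__(self, *exprs):
        self.counts = Counter(exprs)

    def contains(self, pred):
        # Nondeterministic-computation query: does some state match?
        return any(pred(e) for e in self.counts)

    def frequencies(self):
        # Probabilistic-computation query: relative frequencies of states.
        total = sum(self.counts.values())
        return {e: n / total for e, n in self.counts.items()}

m = Multi('a', 'b', 'b', 'c')
```

Quantum-style amplitudes would require tagging each element with a complex weight rather than a count, but the aggregate-then-query shape would be the same.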


Is There Always a Definite Result?


For an expression like (1 + (2 + 2)) + (3 + 4) it doesn’t matter in what order one evaluates things; one always gets the same result—so that the corresponding multiway graph leads to just a single final state:


But it’s not always true that there’s a single final state. For example, with the definitions


standard evaluation in the Wolfram Language gives the result 0 for f[f[0]] but the full multiway graph shows that (with a different evaluation order) it’s possible instead to get the result g[g[0]]:


And in general when a certain collection of rules (or definitions) always leads to just a single result, one says that the collection of rules is confluent; otherwise it’s not. Pure arithmetic turns out to be confluent. But there are plenty of examples (e.g. in string rewriting) that are not. Ultimately a failure of confluence must come from the presence of branchlike separation—or in effect a conflict between behavior on two different branches. And so in the example above we see that there are branchlike-separated “conflicting” events that never resolve—yielding two different final outcomes:


As an even simpler example, consider the definitions and . In the Wolfram Language these definitions immediately overwrite each other. But assume they could both be applied (say through explicit , rules). Then there’s a multiway graph with two “unresolved” branches—and two outcomes:


For string rewriting systems, it’s easy to enumerate possible rules. The rule


(that effectively sorts the elements in the string) is confluent:


But the rule


is not confluent


and “evaluates” BABABA to four distinct outcomes:


These are all cases where “internal conflicts” lead to multiple different final results. But another way to get different results is through “side effects”. Consider first setting x = 0 then evaluating {x = 1, x + 1}:


If the order of evaluation is such that x + 1 is evaluated before x = 1 it will give 1, otherwise it will give 2, leading to the two different outcomes {1, 1} and {1, 2}. In some ways this is like the example above where we had two distinct rules: and . But there’s a difference. While explicit rules are essentially applied only “instantaneously”, an assignment like x = 1 has a “permanent” effect, at least until it is “overwritten” by another assignment. In an evaluation graph like the one above we’re showing particular expressions generated during the evaluation process. But when there are assignments, there’s an additional “hidden state” that in the Wolfram Language one can think of as corresponding to the state of the global symbol table. If we included this, then we’d again see rules that apply “instantaneously”, and we’d be able to explicitly trace causal dependencies between events. But if we elide it, then we effectively hide the causal dependence that’s “carried” by the state of the symbol table, and the evaluation graphs we’ve been drawing are necessarily somewhat incomplete.
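The role of the "hidden state" is easy to make explicit. Here is a Python sketch that simulates the two evaluation orders for {x = 1, x + 1} (after x = 0), carrying the symbol table as explicit state:

```python
def evaluate(order):
    # Explicit "symbol table" carrying the hidden state described above.
    table = {'x': 0}                       # after the initial x = 0
    results = {}
    for item in order:
        if item == 'x = 1':
            table['x'] = 1
            results[item] = 1              # an assignment returns its value
        else:                              # 'x + 1'
            results[item] = table['x'] + 1
    return [results['x = 1'], results['x + 1']]

print(evaluate(['x = 1', 'x + 1']))   # assignment done first
print(evaluate(['x + 1', 'x = 1']))   # x + 1 done first
```

The two orders give {1, 2} and {1, 1} respectively, exactly because the assignment's effect on the symbol table persists across subsequent events.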


Computations That Never End


The basic operation of the Wolfram Language evaluator is to keep doing transformations until the result no longer changes (or, in other words, until a fixed point is reached). And that’s convenient for being able to “get a definite answer”. But it’s rather different from what one usually imagines happens in physics. Because in that case we’re typically dealing with things that just “keep progressing through time”, without ever getting to any fixed point. (“Spacetime singularities”, say in black holes, do for example involve reaching fixed points where “time has come to an end”.)


But what happens in the Wolfram Language if we just type , without giving any value to ? The Wolfram Language evaluator will keep evaluating this, trying to reach a fixed point. But it’ll never get there. And in practice it’ll give a message, and (at least in Version 13.3 and above) return a TerminatedEvaluation object:


What’s going on inside here? If we look at the evaluation graph, we can see that it involves an infinite chain of evaluation events, that progressively “extrude” +1’s:


A slightly simpler case (that doesn’t raise questions about the evaluation of Plus) is to consider the definition


which has the effect of generating an infinite chain of progressively more “f-nested” expressions:


Let’s say we define two functions:


Now we don’t just get a simple chain of results; instead we get an exponentially growing multiway graph:


In general, whenever we have a recursive definition (say of f in terms of f or x in terms of x) there’s the possibility of an infinite process of evaluation, with no “final fixed point”. There are of course specific cases of recursive definitions that always terminate—like the Fibonacci example we gave above. And indeed when we’re dealing with so-called “primitive recursion” this is how things inevitably work: we’re always “systematically counting down” to some defined base case (say f[1] = 1).


When we look at string rewriting (or, for that matter, hypergraph rewriting), evolution that doesn’t terminate is quite ubiquitous. And in direct analogy with, for example, the string rewriting rule A → BBB, BB → A we can set up the definitions


and then the (infinite) multiway graph begins:


One might think that the possibility of evaluation processes that don’t terminate would be a fundamental problem for a system set up like the Wolfram Language. But it turns out that in current normal usage one basically never runs into the issue except by mistake, when there’s a bug in one’s program.


Still, if one explicitly wants to generate an infinite evaluation structure, it’s not hard to do so. Beyond one can define


and then one gets the multiway graph


which has CatalanNumber[t] (or asymptotically ~4^t) states at layer t.
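The asymptotic rate is easy to check numerically: the Catalan numbers C(t) = binomial(2t, t)/(t + 1) have successive ratios that approach 4, so up to a polynomial factor the layer sizes grow like 4^t:

```python
from math import comb

def catalan(t):
    # Catalan number C(t) = binomial(2t, t) / (t + 1)
    return comb(2 * t, t) // (t + 1)

# Successive ratios C(t+1)/C(t) approach 4, the asymptotic growth rate.
ratios = [catalan(t + 1) / catalan(t) for t in range(1, 20)]
```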


Another “common bug” form of non-terminating evaluation arises when one makes a primitive-recursion-style definition without giving a “boundary condition”. Here, for example, is the Fibonacci recursion without f[0] and f[1] defined:


And in this case the multiway graph is infinite


with ~2^t states at layer t.


But consider now the “unterminated factorial recursion”


On its own, this just leads to a single infinite chain of evaluation


but if we add the explicit rule that multiplying anything by zero gives zero (i.e. 0 _ → 0) then we get


in which there’s a “zero sink” in addition to an infinite chain of f[–n] evaluations.
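The zero sink can be exhibited with a small Python sketch—assuming the recursion has the form f[n] → n × f[n − 1] (no base case), with the added rule 0 × _ → 0. Starting from f[0], one branch collapses to 0 while another keeps extending the f[−n] chain:

```python
def successors(e):
    # All single-rewrite successors under: f[n] -> n*f[n-1]  and  0*_ -> 0
    out = []
    if isinstance(e, int):
        return out
    if e[0] == 'f':
        out.append(('*', e[1], ('f', e[1] - 1)))
        return out
    _, a, b = e
    if a == 0:
        out.append(0)                      # the "zero sink"
    out += [('*', s, b) for s in successors(a)]
    out += [('*', a, s) for s in successors(b)]
    return out

def reachable(e, depth):
    # States reachable within `depth` multiway steps.
    seen = {e}
    for _ in range(depth):
        seen |= {t for s in seen for t in successors(s)}
    return seen

states = reachable(('f', 0), 3)
```

Within three steps 0 is reachable from f[0], even though other branches (via f[−1], f[−2], …) never terminate.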


Some definitions have the property that they provably always terminate, though it may take a while. An example is the combinator definition we made above:


Here’s the multiway graph starting with f[f[f][f]][f], and terminating in at most 10 steps:


Starting with f[f[f][f][f][f]][f] the multiway graph becomes


but again the evaluation always terminates (and gives a unique result). In this case we can see why this happens: at each step f[x_][y_] effectively “discards ”, thereby “fundamentally getting smaller”, even as it “puffs up” by making three copies of .


But if instead one uses the definition


things get more complicated. In some cases, the multiway evaluation always terminates


while in others, it never terminates:


But then there are cases where there is sometimes termination, and sometimes not:


In this particular case, what’s happening is that evaluation of the first argument of the “top-level f” never terminates, but if the top-level f is evaluated before its arguments then there’s immediate termination. Since the standard Wolfram Language evaluator evaluates arguments first (“leftmost-innermost evaluation”), it therefore won’t terminate in this case—even though there are branches in the multiway evaluation (corresponding to “outermost evaluation”) that do terminate.
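The innermost-vs-outermost distinction can be illustrated with a deliberately simplified, hypothetical rule set (not the article's definition): k[x_][y_] → x together with an atom loop that rewrites to itself. For k[a][loop], outermost evaluation terminates immediately, while innermost evaluation keeps reducing the argument forever:

```python
def has_redex(t):
    # In this toy system every k-application and the atom 'loop' is a redex.
    return t == 'loop' or isinstance(t, tuple)

def step(t, strategy):
    # One rewrite under: k[x][y] -> x   and   loop -> loop
    if strategy == 'innermost' and isinstance(t, tuple):
        for i in (1, 2):
            if has_redex(t[i]):            # arguments first
                return t[:i] + (step(t[i], strategy),) + t[i + 1:]
        return t[1]
    if isinstance(t, tuple):               # outermost: top redex first
        return t[1]
    return 'loop' if t == 'loop' else t

def terminates(t, strategy, limit=50):
    for _ in range(limit):
        if not has_redex(t):
            return True
        t = step(t, strategy)
    return False

expr = ('k', 'a', 'loop')                  # k[a][loop]
```

Outermost reduction fires the k-rule at the top and reaches the normal form a in one step; innermost reduction gets stuck rewriting loop → loop indefinitely.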


Transfinite Evaluation


If a computation reaches a fixed point, we can reasonably say that that’s the “result” of the computation. But what if the computation goes on forever? Might there still be some “symbolic” way to represent what happens—that for example allows one to compare results from different infinite computations?


In the case of ordinary numbers, we know that we can define a “symbolic infinity” ∞ (Infinity in Wolfram Language) that represents an infinite number and has all the obvious basic arithmetic properties:


But what about infinite processes, or, more specifically, infinite multiway graphs? Is there some useful symbolic way to represent such things? Yes, they’re all “infinite”. But somehow we’d like to distinguish between infinite graphs of different forms, say:


And already for integers, it’s been known for more than a century that there’s a more detailed way to characterize infinities than just referring to them all as ∞: it’s to use the idea of transfinite numbers. And in our case we can imagine successively numbering the nodes in a multiway graph, and seeing what the largest number we reach is. For an infinite graph of the form


(obtained say from x = x + 1 or x = {x}) we can label the nodes with successive integers, and we can say that the “largest number reached” is the transfinite ordinal ω.


A graph consisting of two infinite chains is then characterized by 2ω, while an infinite 2D grid is characterized by ω^2, and an infinite binary tree is characterized by 2^ω.


What about larger numbers? To get to ω^ω we can use a rule like


that effectively yields a multiway graph that corresponds to a tree in which successive layers have progressively larger numbers of branches:


One can think of a definition like x = x + 1 as setting up a “self-referential data structure”, whose specification is finite (in this case essentially a loop), and where the infinite evaluation process arises only when one tries to get an explicit value out of the structure. More elaborate recursive definitions can’t, however, readily be thought of as setting up straightforward self-referential data structures. But they still seem able to be characterized by transfinite numbers.


In general many multiway graphs that differ in detail will be associated with a given transfinite number. But the expectation is that transfinite numbers can potentially provide robust characterizations of infinite evaluation processes, with different constructions of the “same evaluation” able to be identified as being associated with the same canonical transfinite number.


Most likely, definitions purely involving pattern matching won’t be able to generate infinite evaluations beyond ε0 = ω^ω^ω^…—which is also the limit of where one can reach with proofs based on ordinary induction, Peano Arithmetic, etc. It’s perfectly possible to go further—but one needs to explicitly use functions like NestWhile etc. in the definitions that are given.


And there’s another issue as well: given a particular set of definitions, there’s no limit to how difficult it can be to determine the ultimate multiway graph that’ll be produced. In the end this is a consequence of computational irreducibility, and of the undecidability of the halting problem, etc. And what one can expect in the end is that some infinite evaluation processes one will be able to prove can be characterized by particular transfinite numbers, but others one won’t be able to “tie down” in this way—and in general, as computational irreducibility might suggest, won’t ever allow one to give a “finite symbolic summary”.


The Question of the Observer


One of the key lessons of our Physics Project is the importance of the character of the observer in determining what one “takes away” from a given underlying system. And in setting up the evaluation process—say in the Wolfram Language—the typical objective is to align with the way human observers expect to operate. And so, for example, one normally expects that one will give an expression as input, then in the end get an expression as output. The process of transforming input to output is analogous to the doing of a calculation, the answering of a question, the making of a decision, the forming of a response in human dialog, and potentially the forming of a thought in our minds. In all of these cases, we treat there as being a certain “static” output.


It’s very different from the way physics operates, because in physics “time always goes on”: there’s (essentially) always another step of computation to be done. In our usual description of evaluation, we talk about “reaching a fixed point”. But an alternative would be to say that we reach a state that just repeats unchanged forever—but we as observers equivalence all those repeats, and think of it as having reached a single, unchanging state.


Any modern practical computer also fundamentally works much more like physics: there are always computational operations going on—even though those operations may end up, say, continually putting the exact same pixel in the same place on the screen, so that we can “summarize” what’s going on by saying that we’ve reached a fixed point.


There’s much that can be done with computations that reach fixed points, or, equivalently, with functions that return definite values. And in particular it’s straightforward to compose such computations or functions, continually taking output and then feeding it in as input. But there’s a whole world of other possibilities that open up once one can deal with infinite computations. As a practical matter, one can treat such computations “lazily”—representing them as purely symbolic objects from which one can derive particular results if one explicitly asks to do so.


One kind of result might be of the type typical in logic programming or automated theorem proving: given a potentially infinite computation, is it ever possible to reach a specified state (and, if so, what is the path to do so)? Another type of result might involve extracting a particular “time slice” (with some choice of foliation), and in general representing the result as a Multi. And still another type of result (reminiscent of “probabilistic programming”) might involve not giving an explicit Multi, but rather computing certain statistics about it.


And in a sense, each of these different kinds of results can be thought of as what’s extracted by a different kind of observer, who is making different kinds of equivalences.


We have a certain typical experience of the physical world that’s determined by features of us as observers. For example, as we mentioned above, we tend to think of “all of space” progressing “together” through successive moments of time. And the reason we think this is that the regions of space we typically see around us are small enough that the speed of light delivers information on them to us in a time that’s short compared to our “brain processing time”. If we were bigger or faster, then we wouldn’t be able to think of what’s happening in all of space as being “simultaneous” and we’d immediately be thrust into issues of relativity, reference frames, etc.


And in the case of expression evaluation, it’s very much the same kind of thing. If we have an expression laid out in computer memory (or across a network of computers), then there’ll be a certain time to “collect information spatially from across the expression”, and a certain time that can be attributed to each update event. And the essence of array programming (and much of the operation of GPUs) is that one can assume—like in the typical human experience of physical space—that “all of space” is being updated “together”.

But in our analysis above, we haven’t assumed this, and instead we’ve drawn causal graphs that explicitly trace dependencies between events, and show which events can be considered to be spacelike separated, so that they can be treated as “simultaneous”.

We’ve also seen branchlike separation. In the physics case, the assumption is that we as observers sample in an aggregated way across extended regions in branchial space—just as we do across extended regions in physical space. And indeed the expectation is that we encounter what we describe as “quantum effects” precisely because we are of limited extent in branchial space.

In the case of expression evaluation, we’re not used to being extended in branchial space. We typically imagine that we’ll follow some particular evaluation path (say, as defined by the standard Wolfram Language evaluator), and be oblivious to other paths. But, for example, strategies like speculative execution (typically applied at the hardware level) can be thought of as representing extension in branchial space.

And at a theoretical level, one certainly thinks of different kinds of “observations” in branchial space. In particular, there’s nondeterministic computation, in which one tries to identify a particular “thread of history” that reaches a given state, or a state with some property one wants.

One crucial feature of observers like us is that we are computationally bounded—which puts limitations on the kinds of observations we can make. And for example computational irreducibility then limits what we can immediately know (and aggregate) about the evolution of systems through time. And similarly multicomputational irreducibility limits what we can immediately know (and aggregate) about how systems behave across branchial space. And insofar as any computational devices we build in practice must be ones that we as observers can deal with, it’s inevitable that they’ll be subject to these kinds of limitations. (And, yes, in talking about quantum computers there tends to be an implicit assumption that we can in effect overcome multicomputational irreducibility, and “knit together” all the different computational paths of history—but it seems implausible that observers like us can actually do this, or can in general derive definite results without expending computationally irreducible effort.)

One further small comment about observers concerns what in physics are called closed timelike curves—essentially loops in time. Consider the definition:

This gives for example the multiway graph:

One can think of this as connecting the future to the past—something that’s sometimes interpreted as “allowing time travel”. But really this is just a more (time-)distributed version of a fixed point. In a fixed point, a single state is constantly repeated. Here a sequence of states (just two in the example given here) get visited repeatedly. The observer could treat these states as continually repeating in a cycle, or could coarse grain and conclude that “nothing perceptible is changing”.

In spacetime we think of observers as making particular choices of simultaneity surfaces—or in effect picking particular ways to “parse” the causal graph of events. In branchtime the analog of this is that observers pick how to parse the multiway graph. Or, put another way, observers get to choose a path through the multiway graph, corresponding to a particular evaluation order or evaluation scheme. In general, there is a tradeoff between the choices made by the observer, and the behavior generated by applying the rules of the system.

But if the observer is computationally bounded, they cannot overcome the computational irreducibility—or multicomputational irreducibility—of the behavior of the system. And as a result, if there is complexity in the detailed behavior of the system, the observer will not be able to avoid it at a detailed level by the choices they make. A critical idea of our Physics Project, though, is that by appropriate aggregation the observer will detect certain aggregate features of the system that have robust characteristics independent of the underlying details. In physics, this represents a bulk theory suitable for the perception of the universe by observers like us. And presumably there is an analog of this in expression evaluation. But insofar as we’re only looking at the evaluation of expressions we’ve engineered for particular computational purposes, we’re not yet used to seeing “generic bulk expression evaluation”.

But this is exactly what we’ll see if we just go out and run “arbitrary programs”, say found by enumerating certain classes of programs (like combinators or multiway Turing machines). And for observers like us these will inevitably “seem very much like physics”.

The Tree Structure of Expressions

Although we haven’t talked about this so far, any expression fundamentally has a tree structure. So, for example, (1 + (2 + 2)) + (3 + 4) is represented—say internally in the Wolfram Language—as the tree:

So how does this tree structure interact with the process of evaluation? In practice it means for example that in the standard Wolfram Language evaluator there are two different kinds of recursion going on. The first is the progressive (“timelike”) reevaluation of subexpressions that change during evaluation. And the second is the (“spacelike” or “treelike”) scanning of the tree.

In what we’ve discussed above, we’ve focused on evaluation events and their relationships, and in doing so we’ve concentrated on the first kind of recursion—and indeed we’ve often elided some of the effects of the second kind by, for example, immediately showing the result of evaluating Plus[2, 2] without showing more details of how this happens.
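The two kinds of recursion can be seen in a toy evaluator. This is a minimal Python sketch (the function `evaluate` and the tuple encoding are my own, and only a Plus rule is implemented, standing in for the full Wolfram Language evaluator): the list comprehension does the treelike scanning of the expression tree, while each recorded event corresponds to a timelike application of a rule:

```python
def evaluate(expr, events, depth=0):
    """Evaluate a tiny Plus-only expression tree, recording events."""
    if isinstance(expr, int):               # atoms evaluate to themselves
        return expr
    head, *args = expr                      # e.g. ("Plus", 2, 2)
    # treelike recursion: scan the tree, evaluating each subexpression
    vals = [evaluate(a, events, depth + 1) for a in args]
    result = sum(vals) if head == "Plus" else None
    events.append((depth, result))          # a (timelike) evaluation event
    return result

events = []
r = evaluate(("Plus", ("Plus", 1, ("Plus", 2, 2)), ("Plus", 3, 4)), events)
print(r, events)   # 12 [(2, 4), (1, 5), (1, 7), (0, 12)]
```

The innermost events appear first in the trace, just as `Plus[2, 2]` is resolved before the expressions that contain it.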

But here now is a more complete representation of what’s going on in evaluating this simple expression:

The solid gray lines in this “trace graph” indicate the subparts of the expression tree at each step. The dashed gray lines indicate how these subparts are combined to make expressions. And the red lines indicate actual evaluation events where rules (either built in or specified by definitions) are applied to expressions.

It’s possible to read off things like causal dependence between events from the trace graph. But there’s a lot else going on. Much of it is at some level irrelevant—because it involves recursing into parts of the expression tree (like the head Plus) where no evaluation events occur. Removing these parts we then get an elided trace graph in which for example the causal dependence is clearer:

Here’s the trace graph for the evaluation of f[5] with the standard recursive Fibonacci definition

and here’s its elided form:

At least when we discussed single-way evaluation above, we mostly talked about timelike and spacelike relations between events. But with tree-structured expressions there are also treelike relations.

Consider the rather trivial definition

and look at the multiway graph for the evaluation of f[f[0]]:

What is the relation between the event on the left branch, and the top event on the right branch? We can think of them as being treelike separated. The event on the left branch transforms the whole expression tree. But the event on the right branch just transforms a subexpression.

Spacelike-separated events affect disjoint parts in an expression (i.e. ones on distinct branches of the expression tree). But treelike-separated events affect nested parts of an expression (i.e. ones that appear on a single branch in the expression tree). Inevitably, treelike-separated events also have a kind of one-way branchlike separation: if the “higher event” in the tree happens, the “lower one” cannot.

In terms of Wolfram Language part numbers, spacelike-separated events affect parts with disjoint numbers, say {2, 5} and {2, 8}. But treelike-separated events affect parts with overlapping sequences of part numbers, say {2} and {2, 5} or {2, 5} and {2, 5, 1}.
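In this hypothetical Python sketch (the function name `separation` is mine), the part-number criterion just described becomes a simple prefix test on part specifications:

```python
def separation(p, q):
    """Classify two Wolfram-Language-style part specifications.

    Treelike: one part is nested inside the other, i.e. one spec
    is a prefix of the other. Spacelike: the parts are disjoint.
    """
    k = min(len(p), len(q))
    if p[:k] == q[:k]:              # one spec is a prefix of the other
        return "treelike"
    return "spacelike"

print(separation((2, 5), (2, 8)))     # disjoint subtrees -> spacelike
print(separation((2, 5), (2, 5, 1)))  # nested parts      -> treelike
```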

In our Physics Project there’s nothing quite like treelike relations built in. The “atoms of space” are related by a hypergraph—without any kind of explicit hierarchical structure. The hypergraph can take on what amounts to a hierarchical structure, but the fundamental transformation rules won’t intrinsically take account of this.

The hierarchical structure of expressions is incredibly important in their practical use—where it presumably leverages the hierarchical structure of human language, and of ways we talk about the world:

We’ll see soon below that we can in principle represent expressions without having hierarchical structure explicitly built in. But in almost all uses of expressions—say in Wolfram Language—we end up needing to have hierarchical structure.

If we were only doing single-way evaluation, the hierarchical structure of expressions would be important in determining the order of evaluation to be used, but it wouldn’t immediately enmesh with core features of the evaluation process. But in multiway evaluation “higher” treelike-separated events can in effect cut off the evaluation histories of “lower” ones—and so the tree structure is inevitably central to the evaluation process. For spacelike- and branchlike-separated events, we can always choose different reference frames (or different spacelike or branchlike surfaces) that arrange the events differently. But treelike-separated events—a little like timelike-separated ones—have a certain forced relationship that cannot be affected by an observer’s choices.

Grinding Everything Down to Hypergraphs

To draw causal graphs—and in fact to do a lot of what we’ve done here—we need to know “what depends on what”. And with our normal setup for expressions this can be quite subtle and complicated. We apply the rule to to give the result . But does the a that “comes out” depend on the a that went in, or is it somehow something that’s “independently generated”? Or, more extremely, in a transformation like , to what extent is it “the same 1” that goes in and comes out? And how do these issues of dependence work when there are the kinds of treelike relations discussed in the previous section?

The Wolfram Language evaluator defines how expressions should be evaluated—but doesn’t immediately specify anything about dependencies. Often we can look “after the fact” and deduce what “was involved” and what was not—and thus what should be considered to depend on what. But it’s not uncommon for it to be hard to know what to say—forcing one to make what seem like arbitrary decisions. So is there any way to avoid this, and to set things up so that dependency becomes somehow “obvious”?

It turns out that there is—though, perhaps not surprisingly, it comes with difficulties of its own. But the basic idea is to go “below expressions”, and to “grind everything down” to hypergraphs whose nodes are ultimate direct “carriers” of identity and dependency. It’s all deeply reminiscent of our Physics Project—and its generalization in the ruliad. Though in those cases the individual elements (or “emes” as we call them) exist far below the level of human perception, while in the hypergraphs we construct for expressions, things like symbols and numbers appear directly as emes.

So how can we “compile” arbitrary expressions to hypergraphs? In the Wolfram Language something like a + b + c is the “full-form” expression

which corresponds to the tree:

And the point is that we can represent this tree by a hypergraph:

Plus, a, b and c appear directly as “content nodes” in the hypergraph. But there are also “infrastructure nodes” (here labeled with integers) that specify how the different pieces of content are “related”—here with a 5-fold hyperedge representing Plus with three arguments. We can write this hypergraph out in “symbolic form” as:

Let’s say instead we have the expression a + (b + c), or Plus[a, Plus[b, c]], which corresponds to the tree:

We can represent this expression by the hypergraph

which can be rendered visually as:

What does evaluation do to such hypergraphs? Essentially it must transform collections of hyperedges into other collections of hyperedges. So, for example, when x_ + y_ is evaluated, it transforms a set of 3 hyperedges to a single hyperedge according to the rule:

(Here the list on the left-hand side represents three hyperedges in any order—and so is effectively assumed to be orderless.) In this rule, the literal Plus acts as a kind of key to determine what should happen, while the specific patterns define how the input and output expressions should be “knitted together”.

So now let’s apply this rule to the expression 10 + (20 + 30). The expression corresponds to the hypergraph

where, yes, there are integers both as content elements, and as labels or IDs for “infrastructure nodes”. The rule operates on collections of hyperedges, always consuming 3 hyperedges, and generating 1. We can think of the hyperedges as “fundamental tokens”. And now we can draw a token-event graph to represent the evaluation process:
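Here is a deliberately simplified Python sketch of this kind of token-event rewriting (the dictionary encoding and the function `evaluate_tokens` are my own simplification: each Plus node becomes a single token rather than the article's three hyperedges, but the same idea of consuming tokens, producing tokens, and tracing one event per rewrite applies):

```python
def evaluate_tokens(edges):
    """Repeatedly rewrite: a Plus token whose arguments are both
    literal numbers is consumed, and its node id thereafter stands
    for the resulting value; each rewrite is one 'event'."""
    edges = dict(edges)                 # node id -> ("Plus", arg, arg)
    values, events = {}, []
    def resolve(a):
        return values.get(a, a) if isinstance(a, str) else a
    while edges:
        for nid, (head, x, y) in list(edges.items()):
            x, y = resolve(x), resolve(y)
            if isinstance(x, int) and isinstance(y, int):
                values[nid] = x + y     # apply the Plus rule: one event
                events.append((nid, x + y))
                del edges[nid]
    return values, events

# 10 + (20 + 30): "n2" is the inner Plus node, "n1" the outer one
values, events = evaluate_tokens({"n2": ("Plus", 20, 30),
                                  "n1": ("Plus", 10, "n2")})
print(values["n1"], events)   # 60 [('n2', 50), ('n1', 60)]
```

The node ids play the role of "infrastructure emes": the second event depends on the first precisely because they share the eme `"n2"`.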

Here’s the slightly more complicated case of (10 + (20 + 20)) + (30 + 40):

But here now is the critical point. By looking at whether there are emes in common from one event to another, we can determine whether there is dependency between those events. Emes are in a sense “atoms of existence” that maintain a definite identity, and immediately allow one to trace dependency.

So now we can fill in causal edges, with each edge labeled by the emes it “carries”:

Dropping the hyperedges, and adding in an initial “Big Bang” event, we get the (multiway) causal graph:

We should note that in the token-event graph, each expression has been “shattered” into its constituent hyperedges. Assembling the tokens into recognizable expressions effectively involves setting up a particular foliation of the token-event graph. But if we do this, we get a multiway graph expressed in terms of hypergraphs

or in visual form:

As a slightly more complicated case, consider the recursive computation of the Fibonacci number f[2]. Here is the token-event graph in this case:

And here is the corresponding multiway causal graph, labeled with the emes that “carry causality”:

Every kind of expression can be “ground down” in some way to hypergraphs. For strings, for example, it’s convenient to make a separate token out of every character, so that “ABBAAA” can be represented as:

It’s interesting to note that our hypergraph setup can have a certain similarity to machine-level representations of expressions, with every eme in effect corresponding to a pointer to a certain memory location. Thus, for example, in the representation of the string, the infrastructure emes define the pointer structure for a linked list—with the content emes being the “payloads” (and pointing to globally shared locations, like ones for A and B).
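As a sketch of this pointer picture (all names here are hypothetical, and Python dictionaries stand in for memory locations), the string can be reconstructed by chasing the linked-list structure of the infrastructure emes, with the two payloads shared globally:

```python
# "ABBAAA" as a linked list: each cell is (payload_id, next_cell_id),
# and the payloads for A and B are shared content emes.
payloads = {"pA": "A", "pB": "B"}                     # shared content emes
cells = {1: ("pA", 2), 2: ("pB", 3), 3: ("pB", 4),    # infrastructure emes:
         4: ("pA", 5), 5: ("pA", 6), 6: ("pA", None)} # the pointer structure

def read(cells, payloads, start=1):
    """Follow the pointer chain, collecting the shared payloads."""
    out, cur = [], start
    while cur is not None:
        pid, cur = cells[cur]
        out.append(payloads[pid])
    return "".join(out)

print(read(cells, payloads))   # "ABBAAA"
```

A rewrite that rearranges the string then corresponds to changing the `next_cell_id` pointers, not copying the payloads.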

Transformations obtained by applying rules can then be thought of as corresponding just to rearranging pointers. Sometimes “new emes” have to be created, corresponding to new memory being allocated. We don’t have an explicit way to “free” memory. But sometimes some part of the hypergraph will become disconnected—and one can then imagine disconnected pieces to which the observer is not attached being garbage collected.

The Rulial Case

So far we’ve discussed what happens in the evaluation of particular expressions according to particular rules (where those rules could just be all the ones that are built into Wolfram Language). But the concept of the ruliad suggests thinking about all possible computations—or, in our terms here, all possible evaluations. Instead of particular expressions, we are led to think about evaluating all possible expressions. And we are also led to think about using all possible rules for these evaluations.

As one simple approach to this, instead of looking, for example, at a single combinator definition such as

used to evaluate a single expression such as

we can start enumerating all possible combinator rules

and apply them to evaluate all possible expressions:

Various new phenomena show up here. For example, there is now immediately the possibility of not just spacelike and branchlike separation, but also what we can call rulelike separation.

In a trivial case, we could have rules like

and then evaluating x will lead to two events which we can consider rulelike separated:

In the standard Wolfram Language system, the definitions x = a and x = b would overwrite each other. But if we consider rulial multiway evaluation, we’d have branches for each of these definitions.

In what we’ve discussed before, we effectively allow evaluation to take infinite time, as well as infinite space and infinite branchial space. But now we’ve got the new concept of infinite rulial space. We might say from the outset that, for example, we’re going to use all possible rules. Or we might have what amounts to a dynamical process that generates possible rules.

And the key point is that as soon as that process is in effect computation universal, there is a way to translate from one instance of it to another. Different specific choices will lead to a different basis—but in the end they’ll all eventually generate the full ruliad.

And actually, this is where the whole concept of expression evaluation ultimately merges with fundamental physics. Because in both cases, the limit of what we’re doing will be exactly the same: the full ruliad.

The Practical Computing Story

The formalism we’ve discussed here—and particularly its correspondence with fundamental physics—is in many ways a new story. But it has precursors that go back more than a century. And indeed as soon as industrial processes—and production lines—began to be formalized, it became important to understand interdependencies between different parts of a process. By the 1920s flowcharts had been invented, and when digital computers were developed in the 1940s, flowcharts began to be used to represent the “flow” of programs (and in fact Babbage had used something similar even in the 1840s). At first, at least as far as programming was concerned, it was all about the “flow of control”—and the sequence in which things should be done. But by the 1970s the notion of the “flow of data” was also widespread—in some ways reflecting back to the actual flow of electrical signals. In some simple cases various forms of “visual programming”—typically based on connecting virtual wires—have been popular. And even in modern times, it’s not uncommon to talk about “computation graphs” as a way to specify how data should be routed in a computation, for example in sequences of operations on tensors (say for neural net applications).

A different tradition—originating in mathematics in the late 1800s—involved the routine use of “abstract functions” like f(x). Such abstract functions could be used both “symbolically” to represent things, and explicitly to “compute” things. All sorts of (often ornate) formalisms were developed in mathematical logic, with combinators arriving in 1920, and lambda calculus in 1935. By the late 1950s there was LISP, and by the 1970s there was a definite tradition of “functional programming” involving the processing of things by successive application of different functions.

The question of what really depended on what became more significant whenever there was the possibility of doing computations in parallel. This was already being discussed in the 1960s, but became more popular in the early 1980s, and in a sense finally “went mainstream” with GPUs in the 2010s. And indeed our discussion of causal graphs and spacelike separation isn’t far away from the kind of thing that’s often discussed in the context of designing parallel algorithms and hardware. But one difference is that in those cases one’s usually imagining having a “static” flow of data and control, whereas here we’re routinely considering causal graphs, etc. that are being created “on the fly” by the actual progress of a computation.

In many situations—with both algorithms and hardware—one has precise control over when different “events” will occur. But in distributed systems it’s also common for events to be asynchronous. And in such cases, it’s possible to have “conflicts”, “race conditions”, etc. that correspond to branchlike separation. There have been various attempts—many originating in the 1970s—to develop formal “process calculi” to describe such systems. And in some ways what we’re doing here can be seen as a physics-inspired way to clarify and extend these kinds of approaches.

The concept of multiway systems also has a long history—notably appearing in the early 1900s in connection with game graphs, formal group theory and various problems in combinatorics. Later, multiway systems would implicitly show up in considerations of automated theorem proving and nondeterministic computation. In practical microprocessors it’s been common for a decade or so to do “speculative execution” where multiple branches in code are preemptively followed, keeping only the one that’s relevant given actual input received.

And when it comes to branchlike separation, a notable practical example arises in version control and collaborative editing systems. If a piece of text has changes at two separated places (“spacelike separation”), then these changes (“diffs”) can be applied in any order. But if these changes involve the same content (e.g. same characters) then there can be a conflict (“merge conflict”) if one tries to apply the changes—in effect reflecting the fact that these changes were made by branchlike-separated “change events” (and to trace them requires creating different “forks” or what we might call different histories).

It’s perhaps worth mentioning that as soon as one has the concept of an “expression” one is led to the concept of “evaluation”—and as we’ve seen many times here, that’s even true for arithmetic expressions, like 1 + (2 + 3). We’ve been particularly concerned with questions about “what depends on what” in the process of evaluation. But in practice there’s often also the question of when evaluation happens. The Wolfram Language, for example, distinguishes between “immediate evaluation” done when a definition is made, and “delayed evaluation” done when it’s used. There’s also lazy evaluation where what’s immediately generated is a symbolic representation of the computation to be done—with steps or pieces being explicitly computed only later, when they are requested.
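For comparison, here is a minimal Python analog of these three evaluation disciplines (Python has no direct counterpart of Wolfram Language's = and :=, so a plain assignment, a zero-argument function, and a generator are used as stand-ins):

```python
import time

# "Immediate evaluation": the right-hand side is computed once,
# when the definition is made (analogous to = ).
stamp = time.time()

# "Delayed evaluation": recomputed on every use (analogous to := ).
def stamp_now():
    return time.time()

# "Lazy evaluation": a symbolic recipe for a computation; pieces
# are computed only when explicitly requested.
squares = (n * n for n in range(10**9))     # nothing is computed yet
first = [next(squares) for _ in range(3)]
print(first)    # [0, 1, 4]
```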

But what really is “evaluation”? If our “input expression” is 1 + 1, we typically think of this as “defining a computation that can be done”. Then the idea of the “process of evaluation” is that it does that computation, deriving a final “value”, here 2. And one view of the Wolfram Language is that its whole goal is to set up a collection of transformations that do as many computations that we know how to do as possible. Some of those transformations effectively incorporate “factual knowledge” (like knowledge of mathematics, or chemistry, or geography). But some are more abstract, like transformations defining how to do transformations, say on patterns.

These abstract transformations are in a sense the easiest to trace—and often that’s what we’ve concentrated on above. But usually we’ve allowed ourselves to do at least some transformations—like adding numbers—that are built into the “insides” of the Wolfram Language. It’s perhaps worth mentioning that in conveniently representing such a broad range of computational processes the Wolfram Language ends up having some quite elaborate evaluation mechanisms. A common example is the idea of functions that “hold their arguments”, evaluating them only as “specifically requested” by the innards of the function. Another—one that in effect creates a “side chain” to causal graphs—is the use of conditions (e.g. associated with /;) that need to be evaluated to determine whether patterns are supposed to match.

Evaluation is in a sense the central operation in the Wolfram Language. And what we’ve seen here is that it has a deep correspondence with what we can view as the “central operation” of physics: the passage of time. Thinking in terms of physics helps organize our thinking about the process of evaluation—and it also suggests some important generalizations, like multiway evaluation. And one of the challenges for the future is to see how to take such generalizations and “package” them as part of our computational language in a form that we humans can readily understand and make use of.

Some Personal History: Recursion Control in SMP

It was in late 1979 that I first started to design my SMP (“Symbolic Manipulation Program”) system. I’d studied both practical computer systems and ideas from mathematical logic. And one of my conclusions was that any definition you made should always get used, whenever it could. If you set , then you set , you should get (not ) if you asked for . It’s what most people would expect should happen. But like almost all fundamental design decisions, in addition to its many benefits, it had some unexpected consequences. For example, it meant that if you set without having given a value for , you’d in principle get an infinite loop.

Back in 1980 there were computer scientists who asserted that this meant the “infinite evaluation” I’d built into the core of SMP “could never work”. Four decades of experience tells us rather definitively that in practice they were wrong about this (essentially because people just don’t end up “falling into the pothole” when they’re doing actual computations they want to do). But questions like those about made me particularly aware of issues around recursive evaluation. And it bothered me that a recursive factorial definition like f[n_]:=n f[n-1] (the rather less elegant SMP notation was f[$n]::$n f[$n-1]) might just run infinitely if it didn’t have a base case (f[1] = 1), rather than terminating with the value 0, which it “obviously should have”, given that at some point one’s computing 0×….

So in SMP I invented a rather elaborate scheme for recursion control that “solved” this problem. And here’s what happens in SMP (now running on a reconstructed virtual machine):

SMP code

And, yes, if one includes the usual base case for factorial, one gets the usual answer:

SMP code

So what is going on here? Section 3.1 of the SMP documentation in principle tells the story. In SMP I used the term “simplification” for what I’d now call “evaluation”, both because I imagined that most transformations one wanted would make things “simpler” (as in ), and because there was a nice pun between the name SMP and the function Smp that carried out the core operation of the system (yes, SMP rather foolishly used short names for built-in functions). Also, it’s useful to know that in SMP I called an ordinary expression like f[x, y, …] a “projection”: its “head” f was called its “projector”, and its arguments x, y, … were called “filters”.

As the Version 1.0 documentation from July 1981 tells it, “simplification” proceeds like this:

SMP documentation

By the next year, it was a bit more sophisticated, though the default behavior didn’t change:

SMP documentation

With the definitions above, the value of f itself was (compare Association in Wolfram Language):

SMP code

But the key to evaluation without the base case actually came in the “properties” of multiplication:

SMP code

In SMP True was (foolishly) 1. It’s notable here that Flat corresponds to the attribute Flat in Wolfram Language, Comm to Orderless and Ldist to Listable. (Sys indicated that this was a built-in system function, while Tier dealt with weird consequences of the attempted unification of arrays and functions into an association-like construct.) But the critical property here was Smp. By default its value was Inf (for Infinity). But for Mult (Times) it was 1.

And what this did was to tell the SMP evaluator that inside any multiplication, it should allow a function (like f) to be called recursively at most once before the actual multiplication was done. Telling SMP to trace the evaluation of f[5] we then see:

SMP code

So what’s going on here? The first time f appears inside a multiplication its definition is used. But when f appears recursively a second time, it’s effectively frozen—and the multiplication is done using its frozen form, with the result that as soon as a 0 appears, one just ends up with 0.

Reset the Smp property of Mult to infinity, and the evaluation runs away, eventually producing a rather indecorous crash:

SMP code

In effect, the Smp property defines how many recursive evaluations of arguments should be done before a function itself is evaluated. Setting the Smp property to 0 has essentially the same effect as the HoldAll attribute in Wolfram Language: it prevents arguments from being evaluated until a function as a whole is evaluated. Setting Smp to value k basically tells SMP to do only k levels of “depth-first” evaluation before collecting everything together to do a “breadth-first evaluation”.

Let’s look at this for a recursive definition of Fibonacci numbers:

SMP code

With the Smp property of Plus set to infinity, the sequence of evaluations of f follows a pure “depth-first” pattern

SMP code

where we can plot the sequence of f[n] evaluated as:

But with the default setting of 1 for the Smp property of Plus the sequence is different

SMP code

and now the sequence of f[n] evaluated is:

In the pure depth-first case all the exponentially many leaves of the Fibonacci tree are explicitly evaluated. But now the evaluation of f[n] is being frozen after each step and terms are being collected and combined. Starting for example from f[10] we get f[9]+f[8]. And evaluating another step we get f[8]+f[7]+f[7]+f[6]. But now the f[7]’s can be combined into f[8]+2f[7]+f[6] so that they don’t both have to separately be evaluated. And in the end only quadratically many separate evaluations are needed to get the final result.
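The difference between the two evaluation disciplines can be counted explicitly. In this Python sketch (function names are mine; a Counter of frozen terms stands in for SMP's collect-and-combine behavior, with f[0] and f[1] both taken to be 1), the pure depth-first scheme does exponentially many evaluations while the combining scheme needs only quadratically many:

```python
from collections import Counter

def naive_calls(n, counter):
    """Pure depth-first recursion: every leaf of the Fibonacci tree
    is evaluated separately, so the number of calls is exponential."""
    counter["calls"] += 1
    if n <= 1:
        return 1
    return naive_calls(n - 1, counter) + naive_calls(n - 2, counter)

def combined_calls(n):
    """Freeze after each step and combine like terms (a sketch of the
    collect-and-combine idea): each step evaluates each *distinct*
    frozen term once, giving only quadratically many evaluations."""
    terms, total, calls = Counter({n: 1}), 0, 0
    while terms:
        new = Counter()
        for k, mult in terms.items():
            calls += 1                  # one evaluation per distinct term
            if k <= 1:
                total += mult           # base case: each leaf contributes 1
            else:
                new[k - 1] += mult      # f[k] -> f[k-1] + f[k-2],
                new[k - 2] += mult      # with like terms combined
        terms = new
    return total, calls

c = Counter()
value = naive_calls(20, c)
total, calls = combined_calls(20)
print(value, c["calls"], calls)   # 10946 via 21891 calls vs. a few hundred
```

The `Counter` multiplicities play the role of the coefficients in expressions like f[8]+2f[7]+f[6] above.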

I don’t now remember quite why I put it in, but SMP also had another piece of recursion control: the Rec property of a symbol—which basically meant “it’s OK for this symbol to appear recursively; don’t count it when you’re trying to work out whether to freeze an evaluation”.

And it’s worth mentioning that SMP also had a way to handle the original issue:

SMP documentation

It wasn’t a terribly general mechanism, but at least it worked in this case:

SMP code

I always thought that SMP’s “wait and combine terms before recursing” behavior was quite clever, but beyond the factorial and Fibonacci examples here I’m not sure I ever found clear uses for it. Still, with our current physics-inspired way of looking at things, we can see that this behavior basically corresponded to picking a “more spacetime-like” foliation of the evaluation graph.

And it’s a piece of personal irony that right around the time I was trying to figure out recursive evaluation in SMP, I was also working on gauge theories in physics—which in the end involve very much the same kinds of issues. But it took another four decades—and the development of our Physics Project—before I saw the fundamental connection between these things.

\n

After SMP: Further Personal History

\n

The idea of parallel computation was one that I was already thinking about at the very beginning of the 1980s—partly at a theoretical level for things like neural nets and cellular automata, and partly at a practical level for SMP (and indeed by 1982 I had described a Ser property in SMP that was supposed to ensure that the arguments of a particular function would always get evaluated in a definite order “in series”). Then in 1984 I was involved in trying to design a general language for parallel computation on the Connection Machine “massively parallel” computer. The “obvious” approach was just to assume that programs would be set up to operate in steps, even if at each step many different operations might happen in parallel. But I somehow thought that there must be a better approach, somehow based on graphs, and graph rewriting. But back then I didn’t, for example, think of formulating things in terms of causal graphs. And while I knew about phenomena like race conditions, I hadn’t yet internalized the idea of constructing multiway graphs to “represent all possibilities”.

\n

When I started designing Mathematica—and what’s now the Wolfram Language—in 1986, I used the same core idea of transformation rules for symbolic expressions that was the basis for SMP. But I was able to greatly streamline the way expressions and their evaluation worked. And not knowing compelling use cases, I decided not to set up the kind of elaborate recursion control that was in SMP, and instead just to concentrate on basically two cases: functions with ordinary (essentially leftmost-innermost) evaluation and functions with held-argument (essentially outermost) evaluation. And I have to say that in three decades of usage and practical applications I haven’t really missed having more elaborate recursion controls.

\n

In working on A New Kind of Science in the 1990s, issues of evaluation order first came up in connection with “symbolic systems” (essentially, generalized combinators). They then came up more poignantly when I explored the possible computational “infrastructure” for spacetime—and indeed that was where I first started explicitly discussing and constructing causal graphs.

\n

But it was not until 2019 and early 2020, with the development of our Physics Project, that clear concepts of spacelike and branchlike separation for events emerged. The correspondence with expression evaluation got clearer in December 2020 when—in connection with the centenary of their invention—I did an extensive investigation of combinators (leading to my book Combinators). And as I started to explore the general concept of multicomputation, and its many potential applications, I soon saw the need for systematic ways to think about multicomputational evaluation in the context of symbolic language and symbolic expressions.

\n

In both SMP and Wolfram Language the main idea is to “get results”. But particularly for debugging it’s always been of interest to see some kind of trace of how the results are obtained. In SMP—as we saw above—there was a Trace property that would cause any evaluation associated with a particular symbol to be printed. But what about an actual computable representation of the “trace”? In 1990 we introduced the function Trace in the Wolfram Language—which produces what amounts to a symbolic representation of an evaluation process.
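The idea of a computable trace can be illustrated with a small Python sketch (an illustration of the concept only; the Wolfram Language Trace function has a different, richer output format). Each call records a (label, subcalls, result) triple, so the whole evaluation becomes an ordinary nested data structure:

```python
def make_traced(fn, name):
    """Wrap fn so that each call records a (label, subcalls, result)
    triple in its parent's list, turning the whole evaluation into an
    ordinary nested data structure."""
    stack = [[]]                      # stack of children-lists
    def wrapper(*args):
        stack.append([])              # collect this call's subcalls here
        result = fn(*args)
        children = stack.pop()
        label = f"{name}[{', '.join(map(repr, args))}]"
        stack[-1].append((label, children, result))
        return result
    wrapper.trace = stack[0]          # the top-level trace entries
    return wrapper

# f[n_] := f[n - 1] + f[n - 2] with f[0] = f[1] = 1, traced
f = make_traced(lambda n: 1 if n <= 1 else f(n - 1) + f(n - 2), "f")
f(3)
# f.trace[0] == ("f[3]", [("f[2]", [...], 2), ("f[1]", [], 1)], 3)
```

Because the trace is just data, it can be queried, pruned or rendered as a graph, which is exactly what makes a symbolic representation of evaluation useful.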

\n

I had high hopes for Trace—and for its ability to turn things like control flows into structures amenable to direct manipulation. But somehow what Trace produces is almost always too difficult to understand in real cases. And for many years I kept the problem of “making a better Trace” on my to-do list, though without much progress.

\n

The problem of “exposing a process of computation” is quite like the problem of presenting a proof. And in 2000 I had occasion to use automated theorem proving to produce a long proof of my minimal axiom system for Boolean algebra. We wanted to introduce such methods into Mathematica (or what’s now the Wolfram Language). But we were stuck on the question of how to represent proofs—and in 2007 we ended up integrating just the “answer” part of the methods into the function FullSimplify.

\n

By the 2010s we’d had the experience of producing step-by-step explanations in Wolfram|Alpha, as well as exploring proofs in the context of representing pure-mathematical knowledge. And finally in 2018 we introduced FindEquationalProof, which provided a symbolic representation of proofs—at least ones based on successive pattern matching and substitution—as well as a graphical representation of the relationships between lemmas.

\n

After the arrival of our Physics Project—as well as my exploration of combinators—I returned to questions about the foundations of mathematics and developed a whole “physicalization of metamathematics” based on tracing what amount to multiway networks of proofs. But the steps in these proofs were still in a sense purely structural, involving only pattern matching and substitution.

\n

I explored other applications of “multicomputation”, generating multiway systems based on numbers, multiway systems representing games, and so on. And I kept on wondering—and sometimes doing livestreamed discussions about—how best to create a language design around multicomputation. And as a first step towards that, we developed the TraceGraph function in the Wolfram Function Repository, which finally provided a somewhat readable graphical rendering of the output of Trace, and began to show the causal dependencies in at least single-way computation. But what about the multiway case? For the Physics Project we’d already developed MultiwaySystem and related functions in the Wolfram Function Repository. So now the question was: how could one streamline this and have it provide essentially a multiway generalization of TraceGraph? We began to think about—and implement—concepts like Multi, and imagine ways in which general multicomputation could encompass things like logic programming and probabilistic programming, as well as nondeterministic and quantum computation.

\n

But meanwhile, the “x = x + 1 question” that had launched my whole adventure in recursion control in SMP was still showing up—43 years later—in the Wolfram Language. It had been there since Version 1.0, though it never seemed to matter much, and we’d always handled it just by having a global “recursion limit”—and then “holding” all further subevaluations:

\n

\n

But over the years there’d been increasing evidence that this wasn’t quite adequate, and that for example further processing of the held form (even, for example, formatting it) could in extreme cases end up triggering even infinite cascades of evaluations. So finally—in Version 13.2 at the end of last year—we introduced the beginnings of a new mechanism to cut off “runaway” computations, based on a construct called TerminatedEvaluation:

\n
\n
\n

\n
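The behavior of cutting off a runaway evaluation while leaving an inert marker in its place can be sketched in Python (a loose analogy of my own, not the Wolfram Language implementation; with_recursion_limit is a hypothetical helper, not a Wolfram API):

```python
import functools

class TerminatedEvaluation:
    """Inert placeholder for a cut-off subevaluation, loosely modeled
    on Wolfram Language's TerminatedEvaluation construct."""
    def __init__(self, reason):
        self.reason = reason
    def __repr__(self):
        return f"TerminatedEvaluation[{self.reason!r}]"

def with_recursion_limit(limit):
    """Hypothetical helper: once fn has recursed `limit` levels deep,
    further calls return an inert marker instead of recursing, so the
    surrounding computation can still complete."""
    def deco(fn):
        depth = [0]
        @functools.wraps(fn)
        def wrapper(*args):
            if depth[0] >= limit:
                return TerminatedEvaluation("RecursionLimit")
            depth[0] += 1
            try:
                return fn(*args)
            finally:
                depth[0] -= 1
        return wrapper
    return deco

@with_recursion_limit(20)
def x():
    # The analog of x = x + 1: evaluating x requires evaluating x again
    value = x()
    if isinstance(value, TerminatedEvaluation):
        return value           # propagate the inert marker unchanged
    return value + 1
```

Here evaluating x() terminates and yields an inert TerminatedEvaluation object rather than raising an error, which is the essential point: the marker is an ordinary value that the rest of the computation can carry along.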

And from the beginning we wanted to see how to encode within TerminatedEvaluation information about just what evaluation had been terminated. But to do this once again seemed to require having a way to represent the “ongoing process of evaluation”—leading us back to Trace, and making us think about evaluation graphs, causal graphs, etc.

\n

At the beginning x = x + 1 might just have seemed like an irrelevant corner case—and for practical purposes it basically is. But already four decades ago it led me to start thinking not just about the results of computations, but also about how their internal processes can be systematically organized. For years, I didn’t really connect this to my work on explicit computational processes like those in systems such as cellular automata. Hints of such connections did start to emerge as I began to try to build computational models of fundamental physics. But looking back I realize that in x = x + 1 there was already in a sense a shadow of what was to come in our Physics Project and in the whole construction of the ruliad.

\n

Because x = x + 1 is something which—like physics and like the ruliad—necessarily generates an ongoing process of computation. One might have thought that the fact that it doesn’t just “give an answer” was in a sense a sign of uselessness. But what we’ve now realized is that our whole existence and experience is based precisely on “living inside a computational process” (which, fortunately for us, hasn’t just “ended with an answer”). Expression evaluation is in its origins intended as a “human-accessible” form of computation. But what we’re now seeing is that its essence also inevitably encompasses computations that are at the core of fundamental physics. And by seeing the correspondence between what might at first appear to be utterly unrelated intellectual directions, we can expect to inform both of them. Which is what I have started to try to do here.

\n

Notes & Thanks

\n

What I’ve described here builds quite directly on some of my recent work, particularly as covered in my books Combinators: A Centennial View and Metamathematics: Physicalization & Foundations. But as I mentioned above, I started thinking about related issues at the beginning of the 1980s in connection with the design of SMP, and I’d like to thank members of the SMP development team for discussions at that time, particularly Chris Cole, Jeff Greif and Tim Shaw. Thanks also to Bruce Smith for his 1990 work on Trace in Wolfram Language, and for encouraging me to think about symbolic representations of computational processes. In much more recent times, I’d particularly like to thank Jonathan Gorard for his extensive conceptual and practical work on multiway systems and their formalism, both in our Physics Project and beyond. Some of the directions described here have (at least indirectly) been discussed in a number of recent Wolfram Language design review livestreams, with particular participation by Ian Ford, Nik Murzin, and Christopher Wolfram, as well as Dan Lichtblau and Itai Seggev. Thanks also to Wolfram Institute fellows Richard Assar and especially Nik Murzin for their help with this piece.

\n", - "category": "Computational Science", - "link": "https://writings.stephenwolfram.com/2023/09/expression-evaluation-and-fundamental-physics/", - "creator": "Stephen Wolfram", - "pubDate": "Fri, 29 Sep 2023 21:48:31 +0000", - "enclosure": "", - "enclosureType": "", - "image": "", - "id": "", - "language": "en", - "folder": "", - "feed": "wolfram", - "read": false, - "favorite": false, - "created": false, - "tags": [], - "hash": "7936f5db0afca7e042169bdf56dcba3d", - "highlights": [] - }, { "title": "Remembering Doug Lenat (1950–2023) and His Quest to Capture the World with Logic", "description": "\"\"Logic, Math and AI In many ways the great quest of Doug Lenat’s life was an attempt to follow on directly from the work of Aristotle and Leibniz. For what Doug was fundamentally trying to do over the forty years he spent developing his CYC system was to use the framework of logic—in more or […]", @@ -197,28 +395,6 @@ "hash": "320dd32e82e253f0b7c0b7b8282d0a27", "highlights": [] }, - { - "title": "Remembering the Improbable Life of Ed Fredkin (1934–2023) and His World of Ideas and Stories", - "description": "\"\"Programmer of the Universe “OK, so let me tell you…” And so it would begin. A long and colorful story. An elaborate description of a wild idea. In the forty years I knew Ed Fredkin I heard countless wild ideas and colorful stories from him. He always radiated a certain adventurous joy—together with supreme, almost-childlike […]", - "content": "\"\"\n

Programmer of the Universe

\n

Click to enlarge

\n

“OK, so let me tell you…” And so it would begin. A long and colorful story. An elaborate description of a wild idea. In the forty years I knew Ed Fredkin I heard countless wild ideas and colorful stories from him. He always radiated a certain adventurous joy—together with supreme, almost-childlike confidence. Ed was someone who wanted to independently figure things out for himself, and delighted in presenting his often somewhat-outlandish conclusions—whether about technology, science, business or the world—with dramatic showman-like panache.

\n

In all the years I knew Ed, I’m not sure he ever really listened to anything I said (though he did use tools I built). He used to like to tell people I’d learned a lot from him. And indeed we had intellectual interests that should have overlapped. But in actuality our ways of thinking about them mostly didn’t connect much at all. But at a personal and social level it was still always a lot of fun being around Ed and being exposed to his unique intense opportunistic energy—with its repeating themes but ever-changing directions.

\n

And there was one way in which Ed and I were very much aligned: both of our lives were deeply influenced by computers and computing. Ed had started with computers in 1956—as part of one of the very first cohorts of programmers. And perhaps on the basis of that experience, he would still, even at the end of his life, matter-of-factly refer to himself as “the world’s best programmer”. Indeed, so confident was he of his programming prowess that he became convinced that he should in effect be able to write a program for the universe—and make all of physics into a programming problem. It didn’t help that his knowledge of physics was at best spotty (and, for example, I don’t think he ever really learned calculus). But his almost lifelong desire to “program physics” did successfully lead him to the concept of reversible logic, and to what’s now called the “Fredkin gate”. But it also led him to the idea that the universe must be a giant cellular automaton—whose program he could invent.

\n

I first met Ed in 1982—on an island in the Caribbean he had bought with money from taking public a tech company he’d founded. The year before, I had started studying cellular automata, but, unlike Ed, I wasn’t trying to “program” them—to be the universe or anything else. Instead, I was mostly doing what amounted to empirical science, running computer experiments to see what they did, and treating them as part of a computational universe of possible programs “out there to explore”. It wasn’t a methodology I think Ed ever really understood—or cared about. He was a programmer (and inventor), not an empirical scientist. And he was convinced—like a modern analog of an ancient Greek philosopher—that by pure thought he could come up with the whole “clockwork” of the universe.

\n

Central to his picture was the idea that at the bottom of everything was a cellular automaton, with its grid of cells somehow laid out in space. I told Ed countless times that what was known from twentieth-century physics implied this really couldn’t be how things worked at a fundamental level. I tried to interest Ed in my way of using cellular automata. But Ed wasn’t interested. He was going for what he saw as the big prize: using them to “construct the universe”.

\n

Every few years Ed would tell me he’d made progress—and rather dramatically say things like that he’d “found the electron”. I’d politely ask for details. Then start pointing out that it couldn’t work that way. But soon Ed would be telling a story or talking about some completely different idea—about technology, business or something else.

\n

By the mid-1980s I’d discovered a lot about cellular automata. And I always felt a bit embarrassed by Ed’s attempt to use them in what seemed to me like a very naive way for fundamental physics—and I worried (as did happen a few times) that people would dismiss my efforts by identifying them with his.

\n

My own career had begun in the 1970s with traditional fundamental physics. And while I didn’t think cellular automata as such could be directly applied to fundamental physics, I did think that the core computational phenomena I’d discovered through studying cellular automata might be very relevant. And then in the early 1990s I had an idea. In a cellular automaton, space has a fixed grid-like structure. But what if the structure of space is in fact dynamic, and everything in the universe emerges just from the dynamics of that structure? Finally I felt as if there might be a plausible computational foundation for fundamental physics.

\n

I wrote about this in one chapter of my 2002 book A New Kind of Science. I don’t know if Ed ever read what I wrote, but in any case it didn’t seem to affect his idea that the universe was a cellular automaton—and to confuse things further, he told quite a few people that was what I was saying too. At first I found this frustrating—and upsetting—but eventually I realized it was just “Ed being Ed”, and there were still plenty of things to like about Ed.

\n

Nearly twenty years passed. I would see Ed with some regularity. And sometimes I would mention physics. But Ed would just keep talking about his idea that the universe is a cellular automaton. And when we finally made the breakthrough that led in 2020 to our Physics Project it made me a little sad that I didn’t even try to explain it to Ed. The universe isn’t a cellular automaton. But it is computational. And I think that knowing this would have brought a certain intellectual closure to Ed’s long journey and aspirations around physics.

\n

Ed might have considered physics his single most important quest. But Ed’s life as a whole was filled with a remarkably rich assortment of activities and interests. Computers. Inventions. Companies. Airplanes. MIT. His island. The Soviet Union. Not to mention people, like Marvin Minsky, John McCarthy and Richard Feynman (as well as Tom Watson, Richard Branson, and many more). And he would tell stories about all these people and things, and more. Sometimes (particularly later in his life) the stories would repeat. But with remarkable regularity Ed would surprise me with yet another—often at first hard-to-believe—story about a situation or topic that I had no idea he’d ever been involved in.

\n

But what was the “whole Ed story”? I knew a lot of fragments, often quite colorful. But they didn’t seem to fit together into the narrative of a life. And now that Ed is sadly no longer with us, I decided I should really try to “understand Ed” and his story. A few times over the years I had made efforts to ask Ed for systematic historical accounts—and in 2014 I even recorded many hours of oral history with him. But there was clearly much more. And in writing this piece I found myself going through lots of documents and archives—and having quite a few conversations—and unearthing yet more stories than I already knew. And in the end there’s a lot to say—and indeed this has turned into the most difficult and complicated biographical piece I’ve ever written. But I hope that everything I’ve assembled will help tell the often so-wild-you-can’t-make-this-stuff-up story of that most singular individual who I knew all those years.

\n

The Beginning of the Story

\n

Ed never said much to me about his early life. And in fact I think it was only in writing this piece that I even learned he’d grown up in Los Angeles (specifically, East Hollywood). His parents were both (Jewish) Russian immigrants (his father was born in St. Petersburg; his mother in Odessa; they met in LA). His father’s university engineering studies had been cut short by the Russian Revolution, and he now had a one-man wholesale electronic parts business. His mother had in her youth been trained as a concert pianist, and died when Ed was 11, leaving a somewhat fragmented family situation. Ed had a half-sister, 14 years older than him, a brother 6 years older, and a sister a year older. As he told it in later oral histories, he got interested in both machines and money very early, repairing appliances for a fee even as a tween, and soon learning about the idea of owning stock in companies.

\n

But Ed Fredkin’s first piece of public visibility seems to have come in 1948, when he was 13 years old—and it reminds me so much of many of Ed’s later “self-imposed” adventures. There was at that time an exhibition of historic US documents traveling around the country on a train named the Freedom Train. And when the train came to Los Angeles, the young Ed Fredkin decided he had to be the first person to see it:

\n

Click to enlarge

\n

The Los Angeles Times published his account of his adventure—a younger but “quintessentially Ed” story:

\n

Click to enlarge

\n

Ed’s record in high school was at best spotty. But as he tells it, he figured out very early a system for improving the odds in multiple-choice tests, and for example in 9th grade got a top score on a newly instituted (multiple-choice) California-wide IQ test. At the end of high school, Ed applied to Caltech (which was only 13 miles away from where he lived), and largely on the basis of his test scores, was admitted. He ended up spending time working various jobs to support himself, didn’t do much homework, and by his sophomore year—before having to pick a major—dropped out. In 2015 Ed told me a nice story about his time at Caltech:

\n
\n

In 1952–53, I was a student in Linus Pauling’s class where he lectured Freshman Chemistry at Caltech. After class, one day, I asked Pauling “What is a superconductor at the highest known temperature?” Pauling immediately replied “Niobium Nitride, 18 Kelvin”. I was puzzled because I had never heard of Niobium, so I looked it up and, with some difficulty found a reference that defined it as a European name for the metal Columbium.

\n
\n
\n

Later that same day, reading a Pasadena newspaper, I saw an article about Pauling: It announced that Pauling had just returned from Europe (London is what I recall) where Pauling, as Chairman of the International Committee on the naming of the elements, had decided that henceforth the metal Columbium would be renamed Niobium.

\n
\n
\n

I recently looked into that matter and discovered that evidently that renaming was part of a USA–Europe Compromise… In Europe it had been Wolfram and Niobium, in the USA it had been Tungsten and Columbium.

\n
\n
\n

Europe got its way re Niobium and the USA got its way re Tungsten… Perhaps it was a flip of a coin? Someone might know.

\n
\n
\n

As a Wolfram, I thought you might be interested (and, of course, perhaps all this is old hat to you…).

\n
\n

(For what it’s worth, I actually didn’t know this “Wolfram story”, though the details weren’t quite as dramatic as Ed said: the “niobium” decision was actually made in 1949, without Pauling specifically involved, though Pauling did indeed travel to London just before the beginning of the 1952 school year.)

\n

With his interest in machinery, Ed had always been keen on cars, and in his freshman year at Caltech, he also decided to learn to fly a plane. Ed’s older brother, Norman, had joined the Air Force five years earlier. And when he left Caltech—in 1954 at age 19—Ed joined the Air Force too. (If he hadn’t done that, he would have been drafted into the Army.) Ed’s brother Norman (who would spend his whole career in aviation) had been involved in the Korean War, particularly doing aerial reconnaissance—here pictured with his plane (and, no, there don’t seem to be any Air Force pictures of Ed himself):

\n

Click to enlarge

\n

By the time Ed joined the Air Force, the Korean War was over. Ed was assigned to an airbase in Arizona, and by the summer of 1955 he had qualified as a fighter pilot. Ed was never officially a “test pilot”, but he told me stories about figuring out how to take his plane higher than anyone else—and achieving weightlessness by flying his plane in a perfect free-fall trajectory by maintaining an eraser floating in midair in front of him.

\n

By 1956 Ed had been grounded from flying as a result of asthma, and was now at an airbase in Florida as an “intercept controller”—essentially an air traffic controller responsible for guiding fighters to intercept bombers. It was a time when the Air Force was developing the SAGE (Semi-Automatic Ground Environment) air defense system—a huge project whose concept was to use computers to coordinate data from many radars so as to be able to intercept Soviet bombers that might attack the US (cf. Dr. Strangelove, etc.). The center of SAGE development was Lincoln Lab (then part of MIT) in Lexington, MA—with IBM providing computers, Bell (AT&T) providing telecommunications, RAND providing algorithms, etc. And in mid-1956 the Air Force sent a group—including Ed—to test the next phase of SAGE. But as Ed tells it, they were soon informed that actually there would be a one-year delay.

\n

At the time, the SAGE project was busily trying to train people about computers, and some people from the Air Force stayed in the Boston area to participate in this. As Ed tells it, however, he was the only one who didn’t drop out of the training—and over the course of a year it taught him “much of what was then known about computer programming and computer hardware design”. There were at the time only a few hundred people in the world who could call themselves programmers. And Ed was now one of them. (Perhaps he was even “the world’s best”.)

\n

Computers!

\n

Having learned to program, Ed remained at Lincoln Lab, paid by the Air Force, doing what amounted to computational “odd jobs”. Often this had to do with connecting systems together, or coming up with “clever hacks” to overcome particular system limitations. Occasionally it was a little more algorithmic—like when Sputnik was launched in 1957, and Ed got pulled into a piece of “emergency programming” for orbit calculations.

\n

Ed told many stories about “hacking” the bureaucracy at the Air Force (being given a “Secret” stamp so he could read his own documents; avoiding being sent for a year to the Canadian Arctic by finding a loophole associated with his wife being pregnant, etc.)—and in 1958 he left the Air Force (though he would remain a captain in the reserves for many years), but stayed on at Lincoln Lab. Officially he was there as an “administrative assistant”, because—without a degree—that was all they could offer him. But by then he was becoming known as a “computer person”—with lots of ideas. He wanted to start his own company. And (as he tells it) the very first potential customer he visited was an MIT-spinoff acoustics firm called Bolt Beranek & Newman (BBN). And the person he saw there was their “vice president of engineering psychology”—a certain J. C. R. “Lick” Licklider—who persuaded Ed to join BBN to “teach them about computers”.

\n

It didn’t really come to light until he was at BBN, but while at Lincoln Lab Ed had made what would eventually become his first lasting contribution to computer science. He thought of it as a new way of storing textual information in a computer, and he called it “TRIE memory” (after “reTRIEval”). Nowadays we’d call it the trie (or prefix tree) data structure. Here it is for some common words in English made from the letters of “wolf”:

\n
\n
\n

\n
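The data structure is easy to reconstruct in modern terms. Here is a minimal Python version of the standard trie (my sketch, not Fredkin's original TRIE memory layout), storing a few common words made from the letters of “wolf”; the particular word list is my assumption for illustration:

```python
class TrieNode:
    def __init__(self):
        self.children = {}   # one edge per character
        self.is_word = False

class Trie:
    """A prefix tree: words sharing a prefix share the path for it."""
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def contains(self, word):
        node = self.root
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return False
        return node.is_word

    def with_prefix(self, prefix):
        """All stored words beginning with `prefix` (the reTRIEval step)."""
        node = self.root
        for ch in prefix:
            node = node.children.get(ch)
            if node is None:
                return []
        found = []
        def walk(n, path):
            if n.is_word:
                found.append(path)
            for ch, child in sorted(n.children.items()):
                walk(child, path + ch)
        walk(node, prefix)
        return found

trie = Trie()
for word in ["wolf", "low", "owl", "flow", "fowl"]:
    trie.insert(word)
```

Lookup and prefix retrieval both take time proportional to the length of the key, independent of how many words are stored, which was the heart of Fredkin's idea.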

Licklider persuaded Ed to write a paper about tries—which appeared in 1960, and for a couple of decades was essentially Ed’s only academic-style publication:

\n

Click to enlarge

\n

The paper has a pretty clear description of tries, even with some nice diagrams:

\n

Click to enlarge

\n

Even in analyzing the performance of tries, there was only the faintest hint of math in the paper—though Ed realized (probably with input from Licklider) that the efficiency of tries would depend on the Shannon-style redundancy of what they were storing, and he ran Monte Carlo simulations to investigate this:

\n

Click to enlarge

\n

(He explains: “The test program was written in FORTRAN for the IBM 709. The program is composed of 42 subroutines, of which 19 were coded specially for this program and 23 were taken from the library.”)

\n

Tries didn’t make a splash when Ed first introduced them—not least because computers didn’t really have the memory then to make use of them. I think I first heard about them in the late 1970s in connection with spellchecking, and nowadays they’re widely used in lots of text search, bioinformatics and other applications.

\n

Ed had apparently first started talking about tries when he was still in the Air Force. As he explained it to me in 2014:

\n
\n

The Air Force [people] had no idea [what I was talking about]. But I kept on [saying] “I need to find someone who knows something about this that can critique it for me.” And someone says to me, “There’s a guy at MIT who deals in something similar, he calls it lists”. And that was John McCarthy. So, I call up, I get a secretary and, you know, I make a date, and I go to MIT and in building 56 with the computation center, I go to his office and the secretary says he’s somewhere out in the hall. I see some guy wandering back and forth. I go up and say, “You John McCarthy?” He says, “Yes.” So, I say, “I’ve had this idea—” I can’t remember if I was in uniform or not; I might’ve been. I said, “I had this idea, and I’ve written a program and tested it. And might you take a look?” Then he takes this thing, and he starts to read it.

\n
\n
\n

Then he did something that struck me as very weird. He turned around slowly and started walking away, he’s reading and walk, walk, walk, walk, stop. Turns around, walk, walk, walk, walk, back slowly, you know. Finally, he comes back and he stops and he reads and reads. And he’s obviously angry. And I thought, “This is weird.” I said “Does it make sense or anything?” He says, “Yes, it makes sense.” And I said, “Well, what’s up?” He says, “Well, I’ve had the same idea.” And I said, “Oh.” He says, “But I’ve never written it down.” And I said, “Oh, okay. So, do you think I ought to work on it or do something?” He says, “Yeah”. So, that’s how I met John McCarthy.

\n
\n

Ed remained friends with McCarthy for the rest of McCarthy’s life, and involved him in many of his endeavors. In 1956 McCarthy had been one of the organizers of the conference that coined the term “artificial intelligence”, and in 1958 McCarthy began the development of LISP (which was based on linked lists). I have to say I wish I’d known Ed’s story with McCarthy much earlier; I would have handled my own interactions with McCarthy differently—because, as it was, over the course of various encounters from 1981 to 2003 I never persisted very far beyond the curmudgeon stage.

\n

Back around 1958, the circle of “serious computer people” in the Boston area wasn’t very large—and another was Marvin Minsky (who I knew for many years). Between Ed and Licklider, both McCarthy and Minsky became consultants at BBN, and all of them would have many interactions in the years to come.

\n

But in late 1959 there was another entrant in the Boston computer scene: the PDP-1 computer, designed by a certain Ben Gurley for a new company named Digital Equipment Corporation (DEC) that had essentially spun off from Lincoln Lab and MIT. BBN was the first customer for the PDP-1, and Ed was its anchor user:

\n

Click to enlarge

\n

John McCarthy had had the “theoretical” idea of timesharing, whereby multiple users could work on a single computer. Ed figured out how to make it practical on the PDP-1, in the process inventing what would now be called asynchronous interrupts (then the “sequence break system”). And so began a process which led BBN to become a significant force in computing, the creation of the internet, etc.

\n

But in 1961, Ed and a certain Roland Silver, who also worked at BBN, decided to quit BBN—and, strangely enough, to move to Brazil, where they were enamored of the recently elected new president. But when that new president unexpectedly resigned, they abandoned their plan. And when BBN didn’t want them back, Ed decided to start a company, initially doing consulting for DEC. As Ed tells it, he and Roland Silver were such good friends and had so much they talked about that together they couldn’t get anything done, so they decided they’d better split up.


As I was writing this piece, I decided to look up more about Roland Silver—who I found out had been a college roommate of Marvin Minsky’s at Harvard, and had had a long career in math, etc. at MITRE (which had been spun off from Lincoln Lab). But I also remembered that many years ago I’d received letters and a rather new-age newsletter from a certain “Rollo Silver”:

Click to enlarge

Could it be the same person? Yes! And in my archives I also found an ad:

Click to enlarge

Some time after my work on cellular automata in the 1980s, Roland Silver—together with my longtime friend Rudy Rucker—started a newsletter about cellular automata, notably not mentioning Ed, but including a colorful bio for Silver:

Click to enlarge

“Triple-I” (III)


But back to Ed and his story. It was 1961, and Ed had quit his job at BBN. In 1957, he’d met on a Cape Cod beach a woman from Western Massachusetts named Dorothy Abair (who was at the time working at a beauty salon)—and six weeks later they’d married, and now had a 3-year-old daughter. Ed had already lined up some consulting with DEC, and as Ed tells it, with a little “hacking” of bank loans, etc. he was able to officially start Information International Incorporated (III)—with a tiny office in Maynard, MA (home of DEC). But then, one day he gets a call from the Woods Hole Oceanographic Institution. He drives down to Woods Hole with a certain Henry Stommel—an oceanography professor at Harvard—who tells him about a “vortex ocean model”, and asks Ed if he can program it on a PDP-1 so that it displays ocean currents on a screen. And the result is that III soon has a contract for $10k (about $100k today) to do this.


I might add a small footnote here. Years later I was talking to Ed about the origins of cellular automata, and he tells me that a certain Henry Stommel had told him that there were cellular automaton models of sand dunes from the 1930s. At the time—before the web—I couldn’t easily track down who Henry Stommel was (and I had no idea how Ed knew him), and to this day I don’t know what those sand dune models might have been.


But in any case, Ed’s interaction with Woods Hole led to what became III’s first major business: digital reading of film. As Ed tells it:


At Woods Hole … they had these meters which would measure how fast the ocean current was going and which way—and recorded it on 16 mm film with little tiny lights and a little fiber optic thing. And they had built a machine to read that film. I looked at the machine and said “That’ll never work”. And they said “Who are you? Of course it’ll work”, and so on, so forth. OK, so some months later they call me up and say it didn’t work.


I have to tell you this but this is insanely funny. So I decide I’m going to make a film reader and here’s how I’m going to do it. I knew there was a 16 mm projector you could rent from a company and you could stop it and then say “Advance one frame” by clicking and it would just advance one frame at a time. So I thought: say I take the lightbulb out and put a photomultiplier in and point it at the screen of the computer. Then light will come from the screen, go through the lens and be focused on the film, and some would go through the film to the photomultiplier and I would be able to tell how much light got through. And we could write a program to do the rest.


That was my idea, OK.


So not having any money, we rented that projector and I got Digital (DEC) to let me use their milling machine and I bought the photomultiplier tube, and I got Ben Gurley to design the circuitry and connect it to the computer. But there was one more thing. The photomultiplier tube was like a vacuum tube but it had like 16 pins and a very odd connector that no one had. But I thought “Lincoln Labs has parts for everything in their electronics warehouse”. So I called someone I used to work with there, and said “Look, do me a favor and sneak into the parts area, take that part and just give it to me. I’ve ordered one but I’m not going to get it for a while and when I get it I’ll give it to you and you can put it back so it’s not actually a theft.” And he said “OK, I’ll do it” but he asked me why I wanted it and I told him “Well, I’m doing this stuff for Woods Hole to read some film with a computer”.


OK, so he gave me the part and we get it going right away and we’re reading the film, and that solved the problem. But meanwhile this very funny thing happened. Someone from Lincoln Labs found out about all this and said “Hey, you’re reading some kind of film. Is that what you used that thing for?” And I said “Yeah”. And they said “Well, we tried to read some films so we built a gadget and did the same thing you did: we pointed it at the screen of the computer, but we can’t make the software work”. And I said “OK, well, come down and tell me about it”. So they come down and what happens is this. There’s some army people and they have a radar that’s looking at a missile coming in and records on film from an oscilloscope. And they asked could we read this. And to make a long story short they signed another contract….


The whole setup was eventually captured in a patent entitled simply “High-Speed Film Reading”:

Click to enlarge

And actually this wasn’t Ed’s first patent. That had been filed in 1960, while Ed was at BBN—and it was for a mechanical punched card sorter, with arrays of metal pins and the like, and no computer in evidence:

Click to enlarge

III ended up discovering that there were many applications—military and otherwise—for film readers. But their Woods Hole relationship led in another direction as well: computer graphics and data visualization. By 1963 there were perhaps 300,000 oceanographic stations recording their data on punched cards, and the idea was to take this data and produce from it a “computer-compiled oceanographic atlas”. The result was a paper:

Click to enlarge

And with statements like “Only a high-speed computer has the capacity and speed to follow the quickly shifting demands and questions of a human mind exploring a large field of numbers” the paper presented visualizations like:

Click to enlarge

These various developments put III in the center of the emerging field of film-meets-computers systems. The company grew, moving its center of operations to Los Angeles, not least to be near the System Development Corporation (SDC), which RAND had spun off as its software arm to work on the SAGE project.


But Ed was always having new ideas for III, and defining new directions. Ed had brought Minsky and McCarthy into III as board members and consultants, and for example in 1964 III was proposing to SDC a project to make a new version of LISP (and, yes, with no obvious film-meets-computers applications). The proposal gives some insight into the state of III at the time. It says that “From a one-man operation [in 1962], I.I.I. has grown to the point where our gross volume of business for 1964 is in the neighborhood of $1 million [about $10 million today]”. It explains that III has four divisions: Mathematical and Programming Services, Behavioral Science, Operations, and “New York”. It goes on to list various things III is doing: (1) LISP; (2) Inductive Inference on Sequences; (3) Computer Time-Sharing; (4) Programmable Film Readers; (5) The World Oceanographic Data Display System; and (6) Computer Display Systems.


It’s certainly an eclectic collection, reflecting, as such things often do, the character of the company’s founder. From a modern perspective, one item that catches one’s attention is:

Click to enlarge

One can think of it as an early attempt at AI/machine learning—which 60 years later still hasn’t been solved. (GPT-4 says the next letter should be Q, not O.)


But distractions or not, it was a talented team that assembled at III—with lots of cross-fertilization with MIT. III’s business progressively grew, and perhaps it outgrew Ed—and in 1965 Ed stepped down as CEO. In 1968 he left entirely and (as we’ll discuss below) went to MIT, leaving III in the hands of Al Fenaughty, who, years later (and after nearly 30 years at III), would become the chairman of Yandex.


As someone who’s curious about the ways of company founders, I asked Ed many times about his departure from III. He usually just said: “I had a partner who died”. But it’s only now that I’ve pieced together, partly from my 2014 oral history with Ed, what happened. Ed described it to me as the greatest tragedy of his life.


Shortly after he set up III, Ed persuaded Ben Gurley (designer of the PDP-1) to leave DEC and join him at III. I think Ed had hoped to build computers at III, with Gurley as their designer. But on November 7, 1963, in Concord, MA, just a few miles from where I am as I write this, Ben Gurley was murdered—by a single revolver shot through his dining room window as he was about to sit down for dinner with his wife and 7 children. An engineer from DEC (and Lincoln Labs)—about whom Gurley had recently complained to the police—was arrested, and eventually convicted of the crime (after Ed hired a private detective to help). It later turned out that a few years earlier the same engineer was likely also responsible for shooting (though not killing) another engineer from DEC.


I had always assumed that Ed’s decision to leave III happened just after his “partner had died”. But I now realize that it was Gurley’s death, early in the history of III, that set III on its path of making things like film readers, rather than the DEC- or IBM-challenging computers I think Ed had hoped to build.


Even after Ed left active management of III, he was still its chairman. And in late 1968 something would happen that would change his life forever. Taking tech companies public on the “over-the-counter” market had become a thing, and a broker offered to take III public. And on November 26, 1968, III filed its SEC paperwork:

Click to enlarge

III’s “principal product to date” is described as a “programmable film reader”, but the paperwork notes that as of October 31, 1968, the company has no film readers on order—though there are orders for its new microfilm reader, which it hasn’t delivered yet. It also says that proceeds from the offering will be used to fund its “proposed optical character recognition project”. But for our purposes what’s perhaps more significant is that the paperwork records that Ed owns 57.7% of the company, with the Edward Fredkin Charitable Foundation owning 0.4%.


On January 8, 1969, III went public, and Ed was suddenly, at least on paper, worth more than $10M (or more than $80M today). Two years later (perhaps as soon as a lockup period expired), Ed cashed out, with the SEC notice indicating that Ed would be “repaying personal indebtedness to a bank incurred by him for reasons unrelated to the company or its business” (presumably a loan he’d taken out before he could achieve liquidity):

Click to enlarge

So now Ed—at age 37—was wealthy. And in fact the money he made from III would basically last the rest of his life, even through a long sequence of subsequent business failures.


III’s OCR project was never a great success, but III became a key company in digital-to-film systems (relevant to both movies and printing), and in the early 1970s created some of the very first computer-generated special effects, which eventually made it into movies like Star Wars. III’s stock price hovered around $10 per share for years, and in 1996—after PostScript had pretty much taken the market for prepress printing systems—III was sold to Autologic for $35M in stock; then in 2001 Autologic was sold to Agfa for $42M.


The Island


When III went public in 1969 it was the height of the Cold War (which probably didn’t hurt III’s military sales). And many people—including Ed—thought World War III might be imminent. And so it was that in 1970 Ed decided to buy an island in the Caribbean, close enough to the tropics that, as he told me later, he assumed (incorrectly, according to current models) radioactive fallout from a nuclear war wouldn’t reach it.


Apparently Ed was sitting in a dentist’s office when he saw an “Island for Sale” ad in a newspaper. The seller was a shipwreck-scavenging treasure hunter named Bert Kilbride—sometimes called “the last pirate of the Caribbean”—who had started to develop the island (and for several years would manage it for Ed). It’s a fairly small island (about 125 acres, or 0.2 square miles)—in the British Virgin Islands. And its name is Mosquito Island (or sometimes, with some historical justification, Moskito Island). And when Ed bought it, it probably cost something under $1M. (Richard Branson bought the nearby but smaller Necker Island in 1978.)


I visited Ed’s island in January 1982—the first time I met Ed. And, yes, there was a certain “lair of a Bond villain” (think: Dr. No) vibe to the whole thing. Here are pictures I took from a boat leaving the island (notice the just-visible seaplane parked at the island):

Click to enlarge

There was a small resort (and restaurant) on the island, named Drake’s Anchorage (built by the previous owner):

Click to enlarge

And, yes, there were beaches on the island (though I myself have never been much of a beach-goer):

Click to enlarge

And, in keeping with the Bond vibe, there was a seaplane too:

Click to enlarge

There was one house on the island, here pictured from the plane (it so happened that when I visited the island, I was learning to fly small planes myself—so I was interested in the plane):

Click to enlarge

Visiting a nearby island—with its very rundown airport sign—gives some sense of the overall area:

Click to enlarge

Ed claimed it was difficult to run the resort on his island, not least because, he said, “the British Virgin Islands have the lowest average worker productivity in the world”. But there was nevertheless a functioning restaurant, and here I am at it in 1982, along with Charles Bennett, about whom we’ll hear more later:

Click to enlarge

When people talked about Ed, his island was often mentioned, and it projected a general image of overall mystique and extreme wealth. In 1983 a movie called WarGames came out, featuring a reclusive military-oriented computer expert named “Professor Falken”—who had an island. Many people assumed Falken was based on Fredkin (and it now says so all over the internet). However, in writing this piece, I decided to find out what was actually true—so I asked one of the writers of the movie, Walter Parkes. He responded, and, yes, fact is often even stranger than fiction:


Unfortunately I can confirm that Ed was not the inspiration for Stephen Falken. The character was inspired by Steven [sic] Hawking. (Falken = Falcon = Hawking) The movie was first conceived to be about two characters, a young super-genius born into a family incapable of acknowledging his gifts, and a dying scientist in need of a protégé. In the first several drafts Falken was confined to a wheel-chair and was working on understanding the big bang, for which he had created a computer simulation. Little known fact—while writing the character, we had one person in mind to play the role: John Lennon, who was murdered shortly before we finished the script.


(By the way, in a moment of “fact follows fiction”, WarGames featured a computer with lots of flashing lights. I happened to see the movie with Danny Hillis, and as we were walking out of the movie, I said to Danny “Perhaps your computer should have flashing lights too”. And indeed flashing lights became a signature feature of Danny’s Connection Machine computer, as later seen in movies like Jurassic Park.)


Project MAC


After he left III in 1968, Ed’s next stop would be MIT, and specifically Project MAC (the “Multiple Access Computer” Project). But actually Ed had already been involved much earlier with Project MAC. In many ways the project was a follow-on to what Ed had been doing at BBN on timesharing.


In 1963 Ed wrote a long survey article on timesharing:

Click to enlarge

The introduction contains a rather charming window onto the view of computers at the time:

Click to enlarge

And the ads interspersed through the article give a further sense of the time:

Click to enlarge

As illustrations of what can be done with an interactive timeshared computer, there’s a picture from Ed’s vortex ocean simulation—as well as an example of an online “book” about LISP:

Click to enlarge

And, yes, already a kind of “cloud computing” story:

Click to enlarge

There’s also a description of Project MAC—which had just been funded by the Advanced Research Projects Agency (now DARPA). The article said that “MAC” stood either for “Multiple Access Computer” or “Machine-Aided Cognition”. It included various sections on what might be possible with timesharing:

Click to enlarge

The main text of the article ends with a rousing (?) vision of AI taking over from humans (and, yes, even though this is from 60 years ago it’s not so different from what at least some people might say about the “AI future” today):

Click to enlarge

But there’s a curious piece of backstory to Project MAC—from 1961—that appears as a footnote to Ed’s article:

Click to enlarge

Ed told me versions of this story many times. McCarthy had failed to get tenure at MIT, and was looking for another job. (Yes, in retrospect this seems remarkable given all the things he’d already done by then. But those things were computer science—and MIT didn’t yet have a CS department; McCarthy was in the EE department.) Ed, Minsky and McCarthy were going to an SDC meeting in Los Angeles, and while he was out there McCarthy was going to interview at Caltech (his undergraduate alma mater). They had a free evening, and Ed suggested they meet “someone interesting”. Ed remembered Linus Pauling from his time at Caltech. But Pauling wasn’t in. So Minsky suggested they call Richard Feynman. And he was in, and invited them over to his house.


Feynman apparently showed them things like his nanotech-inspiring tiny motor, etc., but somehow the discussion shifted to AI. And Minsky mentioned work a student of his was doing on the “AI problem” of symbolic integration. Then McCarthy started to explain ways a computer could do algebra. Then, as Ed told it to me in 2014:


Feynman produces this sheaf of papers to show us. It was all algebra. And he says “There’s a problem. I’ve done this calculation, and it’s close to 50 pages. A graduate student has done it too, and Murray Gell-Mann has done it. And the only thing we know for sure is that our three results are mutually inconsistent. And the only conclusion we can arrive at is that a person can’t do this much algebra with the hope of getting it right.” And so the question was could there be some system that could help do a problem like that? So what happened is Marvin [Minsky] and I basically fleshed out the idea of a mathematical thing. And it was agreed that we would do it. Marvin and I decided to divide this task up, that I would do one part, and he would do another. Now, we had one bad idea in there, OK. It’s partly Feynman’s fault, but it’s also Marvin and my fault. He was convinced you could not do [math] by typing it. It had to have some kind of handwriting recognition. So, it was decided I would do the handwriting recognition…


And although I didn’t know this until I was writing this piece, it turns out the original proposal for Project MAC was actually based on the idea of building a system for mathematics, and “Project MAC” was originally the “Project on Mathematics and Computation”. Pretty soon, though, the emphasis of Project MAC would shift to the “infrastructure” of timeshared computing. But there was still a math effort, which in time became the MACSYMA system for computer algebra (written in LISP by students and grandstudents of Minsky).


And here this intersects with my personal story. Because many years later (starting in 1976) I would use that system—along with other early computer algebra systems—to do all sorts of physics calculations. My archives still contain an example of what it was like in 1980 to log in to “Project MAC” over the ARPANET (my username was “swolf” in those days; note the system message, the presence of 15 MITishly-named “lusers” altogether, and yes, mail):

Click to enlarge

But, actually, in late 1979 I had already decided to “do my own thing” and build my own system for doing mathematical computation, and eventually much more. And indeed when I first met Ed in 1982 I had recently finished the first version of SMP, and to commercialize it I had started my first company. In 1986 I started to build Mathematica (and what’s now Wolfram Language)—which was released in 1988. Ed started using Mathematica very soon after it was released, and basically continued to do so for the rest of his life.


But picking up the original Project MAC narrative from 1963: the old group from BBN had dispersed but were still writing together about timesharing (and when they said a “debugging system” they meant essentially what we would now call an operating system):

Click to enlarge

And when Project MAC launched in 1963, its “steering committee” included Minsky, Gurley—and Ed. (John McCarthy had landed at Stanford, where he would remain for the rest of his life. I first met him in 1981, at a time when Stanford was trying to recruit me. There was a lunch with the CS department; people went around the room and introduced themselves. McCarthy unhelpfully—and confusingly—said he was “John Smith”.)


Ed at MIT


In 1968, Ed left III—and Minsky, together with Licklider (who had by then become director of Project MAC), persuaded the MIT EE department to hire Ed as a visiting professor for the year. Ed had been spending most of his time at III in Los Angeles, but III also had a pied-à-terre in the Boston area, and indeed its IPO documents listed its address as 545 Technology Square, Cambridge—the very building in which Project MAC was located.


At MIT, Ed invented and taught a freshman course on “Problem Solving”. He told me many times one of his favorite “problem exercises”. Imagine there’s a person who can cure anyone who’s sick just by touching them. How could one set things up to make the best use of this? I must say I never find such implausible hypotheticals terribly interesting. But Ed was proud of a solution that he’d come up with (I think in discussion with Minsky and McCarthy) that involved systematically shuttling millions of people past the healer.


This probably didn’t come from that particular course, but here are some notes I found in an archive of Ed’s papers at MIT that perhaps suggest some of the flavor of the course (we’ll talk about Ed’s interest in the Soviet Union later):

Click to enlarge

In 1968 MIT—and Project MAC in particular—was at the very center of emerging ideas about computer science and AI. A picture from that time captures Ed (third from left) with a few of the people involved: Claude Shannon, John McCarthy and Joe Weizenbaum (creator of ELIZA, the original chatbot):

Click to enlarge

At the end of the 1968 academic year, student reviews from Ed’s course were unexpectedly good, and MIT needed faculty members who could be principal investigators on the government grants that were becoming plentiful for computing—and one of those typical-for-Ed “surprising things” happened: MIT agreed to hire him as a full professor with tenure, despite his lack of academic qualifications. It was a watershed moment for Ed, and I think a piece of validation that he carried with pride for the rest of his life. (For what it’s worth, while Ed was an extreme case, MIT was at that time also hiring at least some other people without the usual PhD qualifications into CS professor positions.)


In 1971 Licklider stepped down from his position as director of Project MAC—and Ed assumed the position. His archives from the time contain lots of administrative material—studies, reports, proposals, budgets, etc.—including many pieces reflecting things like the birth of the ARPANET, the maturing of operating systems and the general enthusiasm about the promise of AI.


One item (conceivably from an earlier time) is Ed’s summary of “Information Processing Terminology” for PDP-1 users, complete with definitions like: “A bit is a binary digit or any thing or state that represents a binary digit. Equivalently, a bit is a set with exactly two members. Note that a bit is not one of the members of such a set”:

Click to enlarge

Ed does not seem to have been very central to the intellectual activities around Project MAC, and the emerging Lab for Computer Science and AI Lab. But his name shows up from time to time. And, for example, in the classic “HAKMEM” collection of 191 math and CS “hacks” from the AI Lab, there are two—both very number oriented—attributed to Ed:

Click to enlarge

Rollo Silver gets mentioned too—notably in connection with “random number generators” involving XORs (and, yes, the code is assembly code—for a PDP-10):

Click to enlarge

Also in HAKMEM is the “munching squares” algorithm—that I was later shown by Bill Gosper:

Click to enlarge
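
In modern terms, munching squares is essentially a one-liner: at each time step t, light up the points whose coordinates satisfy x XOR y = t (some accounts use x XOR y < t). A small reconstruction, with the PDP-1 display replaced by characters:

```python
def munching_frame(t, size=16):
    """Points lit at time t: all grid points (x, y) with x XOR y == t."""
    return {(x, y) for x in range(size) for y in range(size) if x ^ y == t}

def render(points, size=16):
    """Crude character rendering of one frame."""
    return "\n".join(
        "".join("#" if (x, y) in points else "." for x in range(size))
        for y in range(size))

# Stepping t through 0, 1, 2, ... animates the "munching" effect
print(render(munching_frame(3)))
```

At t = 0 the lit points are just the main diagonal; as t increases, square-shaped regions appear to get “munched” out of the pattern.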

And talking of Gosper (whom I’ve known since 1979, and who almost every week seems to send me mail with a surprising new piece of math he’s found with Mathematica): in 1970 the Game of Life cellular automaton had come on the scene, and Gosper and others at MIT were intensely studying it, with Gosper triumphantly discovering the glider gun in November 1970. Curiously—in view of all his emphasis on cellular automata—Ed doesn’t seem to have been involved.

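
Gosper’s glider gun itself is too elaborate to reconstruct here, but the Life rule, and the glider the gun emits, fit in a few lines (a modern sketch, not any MIT-era code): a dead cell with exactly 3 live neighbors is born, and a live cell survives with 2 or 3.

```python
from collections import Counter

def life_step(live):
    """One Game of Life step on a set of live (row, col) cells."""
    neighbor_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0))
    # Alive next step: exactly 3 neighbors, or 2 if already alive
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
```

After 4 steps the glider reproduces itself shifted one cell diagonally, which is what lets a glider gun emit an endless diagonal stream of them.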

But he did do other things. In 1972, for example, as a kind of spinoff from his Problem Solving course, he formed a group called “The Army to End the War” (i.e. the Vietnam War). The idea was that it was time to stop the government from fighting an unwinnable war, and that this could be achieved by an organization that coordinated citizens to threaten a run on banks unless the war was ended. Needless to say, though, this didn’t fit well with the fact that the project Ed ran was funded by the Department of Defense.


Between MIT being what it is, and Ed being who he was, there were often strange things that happened. As Ed tells it, one day he was in Marvin Minsky’s office talking about unrecognized geniuses, and a certain Patrick Gunkel walks in, and identifies himself as such. Ed ended up having a long association with Gunkel, who produced such documents as:

Click to enlarge

(Gunkel’s major goal was to create what he called “ideonomy”, or the “science of ideas”, with divisions like isology, chorology, morology and crinology. I met Gunkel once, in Woods Hole, where he had become something of a local fixture, riding around town with his cat in his bicycle basket.)


But after a few years as director of Project MAC, in 1974 Ed was onto something new: being a visiting scholar at Caltech. After his 1961 encounter, he had gotten to know Richard Feynman—who always enjoyed spending time with “out of the box” people like Ed. And so in 1974 Ed went for a year to Caltech, to be with Feynman.


The Universe as a Cellular Automaton


I think that, at least in the later part of his life, Ed felt his greatest achievements related to cellular automata, and in particular to his idea that the universe is a giant cellular automaton. (My own efforts, and successes, with cellular automata may perhaps have had something to do with that.) I’m not sure when Ed really first hatched this idea, or indeed started to think about cellular automata. Ed had told me many times that when he’d told John McCarthy “the idea”, McCarthy suggested testing it by looking for “roundoff error” in physics, analogous to roundoff error from finite precision in computers. Ed scoffed at this, accusing McCarthy of imagining that there was literally “an IBM 709 computer in the sky”. And Ed’s implication was that he had gotten further than that, imagining the universe to be made more abstractly from a cellular automaton.


I didn’t know quite when this exchange with McCarthy was supposed to have taken place (and, by the way, some of the emerging experimental implications of our Physics Project are precisely about finding evidence of discrete space through something quite analogous to “roundoff errors” in the equations for spacetime). But Ed’s implication to me was always that he’d started exploring cellular automata sometime before 1960.


In the mid-1990s, researching history for my book A New Kind of Science (as I’ll discuss below), I had a detailed email exchange and a long phone conversation with Ed about this. The result was a statement in my notes about the history of cellular automata:

Click to enlarge

At the time, Ed made it sound very convincing. But in writing this piece, I’ve come to the conclusion it’s almost certainly not correct. And of course that’s disappointing given all the effort I put into the history notes in my book, and the almost complete lack of other errors that have surfaced even after two decades of scrutiny. But in any case, it’s interesting to trace the actual development of Ed’s ideas.


One useful piece of evidence is a 25-page document from 1969 in his archives, entitled “Thinking about New Things”—that seems to outline Ed’s thinking at the time. Ed explains “I am not a Physicist, in fact I know very little about modern physics”—but says he wants to suggest a new way of thinking about physics:

Click to enlarge

Soon he starts talking about the possibility that the universe is “merely a simulation on a giant computer”, and relates a version of what he told me about his interaction with John McCarthy:

Click to enlarge

He talks (in a rather programmer kind of way) about the beginning of the universe:

Click to enlarge

He goes on—again in a charmingly “programmer” way:

Click to enlarge

A bit later, Ed is beginning to get to the concept of cellular automata:

Click to enlarge

And there we have it: Ed gets to (3D) cellular automata, though he calls them “spatial automata”:

Click to enlarge

And now he claims that spatial automata can exhibit “very complex behavior”—although his meaning of that will turn out to be a pale shadow of what I discovered in the early 1980s with things like rule 30:

Click to enlarge

But at this point Ed already seems to think he’s almost there—that he’s almost reproduced physics:

Click to enlarge

A little later he’s discussing doing something very much in my style: enumerating possible rules:

Click to enlarge

And still further on he actually talks about 1D rules. And in some sense it might seem like he’s getting very close to what I did in the early 1980s. But his approach is very different. He’s not doing “science” and “empirically seeing what cellular automata do”. Or even being very interested in cellular automata for their own sake. Instead, he’s trying to engineer cellular automata that can “be the universe”. And so for example he wants to consider only left-right symmetric cellular automata “because the universe is isotropic”. And having also decided he wants cellular automata that are symmetric under interchange of black and white (a property he calls “syntactic symmetry”), he ends up with just 8 rules. He could just have simulated these by running them on a computer. But instead he tries to “prove” by pure thought what the rules will do—and comes up with this table:

Click to enlarge

Had he done simulations he might have made pictures like these (labeled using my rule-numbering scheme):


But as it was he didn’t really come to any particular conclusion, other than what amount to a few simple “theorems” about what “data processing” these cellular automata can do:


I must say I find it very odd that—particularly given all the stories about his activities and achievements he told me—Ed never in the four decades I knew him mentioned anything about having thought about 1D cellular automata. Perhaps he didn’t remember, or perhaps—even after everything I wrote about them—he never really knew that I was studying 1D cellular automata.


But in any case, what comes next in the 1969 document is Ed getting back to “pure thought” arguments about how cellular automata might “make physics”:


It’s a bit muddled (though, to be fair, this was a document Ed never published), but at the end it’s basically saying that if the universe really is just a cellular automaton then one should be able to replace physical experiments (that would, for example, need particle accelerators) with “digital hardware” that just runs the cellular automaton. The next section is entitled “The Design of a Simulator”, and discusses how such hardware could be constructed, concluding that a 1000×1000×1000 3D grid of cells could be built for $50M (or nearly half a billion dollars today).


After that, there’s one final (perhaps unfinished) section that reads a bit like a caricature of “I’ve-got-a-theory-of-physics-too” mechanical models of physics:


But, OK, so what does this all mean? Well, first, I think it makes it rather clear that (despite what he told me) by 1969—let alone 1961—Ed hadn’t actually implemented or run cellular automata in any serious way. It’s also notable that in this 1969 piece Ed isn’t using the term “cellular automaton”. The concept of cellular automata had been invented many times, under many different names. But by 1969 the term “cellular automaton” was pretty firmly established, and in fact 1969 might have represented the very peak up to that point of interest in cellular automata in the world at large. But somehow Ed didn’t know about this—or at least wasn’t choosing to connect with it.


Even at MIT Frederick Hennie in the EE department had actually been studying cellular automata—albeit under the name “iterative arrays”—since the very beginning of the 1960s. In 1968 E. F. Codd from IBM (who laid the foundations for SQL—and who worked with Ed’s friend John Cocke) had published a book entitled Cellular Automata. Alvy Ray Smith—in the same department as John McCarthy at Stanford—was writing his PhD thesis on “cellular automata”. In 1969 Marvin Minsky and Seymour Papert published their Perceptrons book, and were apparently talking a lot about cellular automata. And for example by the fall of 1969 Papert’s student Terry Beyer had written a thesis about the “recognition and transformation of figures by iterative arrays of finite state automata”—under the auspices of Project MAC, presumably right under Ed’s nose. (And, no, the thesis doesn’t mention Ed, though it mentions Minsky.)


Right around that time, though, something happens. Ed had been convinced—probably by Minsky and McCarthy—that any cellular automaton capable of “being the universe” had better be computation universal. And now there’s a student named Roger Banks who’s working on seeing what kind of (2D) cellular automaton would be needed to get computation universality. Banks had found examples requiring far fewer than the 29 states von Neumann and Burks had used in the 1950s. But—as he related to me many times—Ed challenged Banks to find a 2-state example (“implementable purely with logic gates”), and Banks soon found it, first describing it in June 1970:


Banks had apparently been interacting with the “Life hackers” at MIT, and in November 1970 some of the thunder of his result was stolen when Bill Gosper at MIT discovered the glider gun, which suggested that even the rules of the Game of Life (albeit involving 9 rather than 5 2D neighbors) were likely to be sufficient for computation universality.


But for our efforts to trace history, Banks’s June 1970 report has a number of interesting elements. It relates the history of cellular automata, without any mention of Ed. But then—in its one mention of Ed—it says:


The “mod-2 rule” that Ed told me he’d simulated in 1961 has finally made an appearance. In an oral history years later Terry Winograd reported that in 1970 he “went to a lecture of Papert’s in which he described a conjecture about cellular automata [which Winograd] came back with a proof of”.


By January 1971, Banks is finishing his thesis, which is now officially supervised by Ed (even though it’s nominally in the mechanical engineering department):


Most of Banks’s work is presented as what amount to “engineering drawings”, but he mentions that he has done some simulations. I don’t know if these included simulations of the mod-2 rule, but it seems likely.


So was 1969 or 1970 the first time the mod-2 rule had been heard from? I’m not sure, but I suspect so. But to confuse things there’s a “display hack” known as “munching squares” (described in HAKMEM) that looks in some ways similar, and that was probably already seen in 1962 on the PDP-1. Here are the frames in a small example of munching squares:
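Those frames can be reconstructed in a few lines. In the commonly described version of munching squares, frame t lights exactly the pixels whose coordinates satisfy x Xor y = t (a sketch, not the original PDP-1 code):

```python
# Munching squares, reconstructed: on frame t, light pixel (x, y)
# exactly when x XOR y == t.
def munching_frame(t, size=32):
    return [[1 if (x ^ y) == t else 0 for x in range(size)]
            for y in range(size)]
```

Because y = x Xor t has exactly one solution in each column, every frame lights exactly `size` pixels, and animating t produces the characteristic dance of nested squares.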


Here’s a video of a bigger example:


I expect Ed saw munching squares, perhaps even in 1962. But it’s not the mod-2 rule—or actually a cellular automaton at all. And even though Ed certainly had the capability to simulate cellular automata back at the beginning of the 1960s (and could even have recorded videos of 2D ones with III’s film technology) the evidence we have so far is that he didn’t. And in fact my suspicion is that it was probably only around the time I met Ed in 1982 when it finally happened.


My First Encounter with Ed


In May 1981 there’d been a conference at MIT on the Physics of Computation. I’d been invited, but in the end I couldn’t go—because (in a pattern that has repeated many times in my life) it coincided with the initial release of my SMP software system. Still, in December 1981 I got the following invitation:


In January 1982 I was planning to go to England to do a few weeks of intensive SMP development on a computer that a friend’s startup had—and I figured I would go to the Caribbean “on the way”.


It was an interesting group that assembled on January 18, 1982, on Mosquito Island. It was the first time I met my now-longtime friend Greg Chaitin. There were physicists there, like Ken Wilson and David Finkelstein. (Despite the promise of the invitation, Feynman’s health prevented him from coming.) And then there were people who’d worked on reversible computation, like Rolf Landauer and Charles Bennett. There were Tom Toffoli and Norm Margolus, who had their cellular automaton machine with them. And finally there was Ed. At first he seemed a little Gatsby-like, watching and listening, but not saying much. I think it was the next morning that Ed pulled me aside rather conspiratorially and said I should come and see something.


There was just one real house (as opposed to cabin) on the island (with enough marble to clinch the Bond-villain-lair vibe). Ed led me to a narrow room in the house—where there was a rather-out-of-place-for-a-tropical-island modern workstation computer. I’d seen workstation computers before; in fact, the company I’d started was at the time (foolishly) thinking of building one. But the computer Ed had was from a company he was CEOing. It was a PERQ 1, made by Three Rivers Computer Corporation, which had been founded by a group from CMU including McCarthy’s former student Raj Reddy. I learned that Three Rivers was a company in trouble, and that Ed had recently jumped in to save it. I also learned that in addition to any other challenges the engineers there might have had, he’d added the requirement that the PERQ be able to successfully operate on a tropical island with almost 100% humidity.


But in any case, Ed wanted to show me something on the screen. And here’s basically what it was:


Ed pressed a button and now this is what happened:


I’d seen plenty of “display hacks” before. Bill Gosper had shown me ones at Xerox PARC back in 1979, and my archives even contain some of the early color laser printer outputs he gave me:


I don’t remember the details of what Ed said. And what I saw looked like “display hacks flashing on the screen”. But Ed also mentioned the more science-oriented idea of reversibility. And I’m pretty sure he mentioned the term “cellular automaton”. It wasn’t a long conversation. And I remember that at the end I said I’d like to understand better what he was showing me.


And so it was that Ed handed me a PERQ 8” floppy disk. And now, 41 years later, here it is, sitting—still unread—in my archives:


It’s not so easy these days to read something like this—and I’m not even sure it will have “magnetically survived”. But fortunately—along with the floppy—there’s something else Ed gave me that day. Two copies of a 9-page printout, presumably of what’s on the floppy:


And what’s there is basically a Pascal program (and the PERQ was a very Pascal-oriented machine; “PERQ” is said to have stood for “Pascal Engine that Runs Quicker”). But what does the program do? The main program is called “CA1”, suggesting that, yes, it was supposed to do something with cellular automata.


There are a few comments:


And there’s code for making help text:


Apparently you press “b” to “clear the Celluar [sic] Automata boundary”, “n” for “Fredkin’s Pattern” and “p” for “EF1”. And at the end there’s a reference to munching squares. The first pattern above is what you get by pressing “n”; the second by pressing “p”.


Both patterns look pretty messy. But if instead you press “a”, you get something with a lot more structure:


I think Ed showed this to me in passing. But he was more interested in the more complicated patterns, and in the fact that you could get them to reverse what they were doing. And in this animated form, I suspect this just looked to me like another munching squares kind of thing.


But, OK, given that we have the program, can we tell what it actually does? The core of it is a bunch of calls to the function rasterop(). Functions like rasterop() were common in computers with bitmapped displays. Their purpose was to apply a certain Boolean operation to the array of black and white pixels in a region of the screen. Here it’s always rasterop(6, …) which means that the function being applied is Boolean function 6, or Xor (or “sum mod 2”).


And what’s happening is that chunks of the screen are getting Xor’ed together: specifically, chunks that are offset by one pixel in each of the four directions. And this is all happening in two phases, swapping between different halves of the framebuffer. Here are the central parts of the sequence of frames that get generated starting from a single cell:


It helps a lot to see the separate frames explicitly. And, yes, it’s a cellular automaton. In fact, it’s exactly the “reversible mod-2 rule”. Here it is for a few more steps, with its simple “self-reproduction” increasingly evident:
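The two-phase Xor’ing of shifted chunks amounts to the following second-order rule. Here’s a minimal sketch on a small periodic grid (the PERQ version worked directly on the framebuffer, but the arithmetic is the same):

```python
# The reversible mod-2 rule as a second-order cellular automaton:
# the next grid is the XOR of each cell's four nearest neighbors on
# the current grid, XORed with the grid from two steps back.
def neighbor_xor(g):
    n = len(g)
    return [[g[(i - 1) % n][j] ^ g[(i + 1) % n][j] ^
             g[i][(j - 1) % n] ^ g[i][(j + 1) % n]
             for j in range(n)] for i in range(n)]

def run_mod2(prev, curr, steps):
    # keep two grids ("halves of the framebuffer") and leapfrog them
    for _ in range(steps):
        prev, curr = curr, [[a ^ b for a, b in zip(ra, rb)]
                            for ra, rb in zip(neighbor_xor(curr), prev)]
    return prev, curr
```

Running the same function with the two grids swapped steps the system backward, which is exactly what the “reverse” behavior Ed demonstrated exploits.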


Back in 1982 I think I only saw the PERQ that one time. But in one of the resort cabins on the other side of the island—there was this (as captured in a slightly blurry photograph that I took):


It was a “cellular automaton machine” built out of “raw electronics” by Tom Toffoli and Norm Margolus—who were the core of Ed’s “Information Mechanics” group at MIT. It didn’t feel much like science, but more like a video DJ performance. Patterns flashing and dancing on the screen. Constant rewiring to produce new effects. I wanted to slow it all down and “sciencify” it. But Tom and Norm always wanted to show yet another strange thing they’d found.


Looking in my archives today, I find just one other photograph I took of the machine. I think I considered this the most striking pattern I saw the machine produce. And, yes, presumably it’s a 2D cellular automaton—though despite my decades of experience with cellular automata I don’t today immediately recognize it:


What did I make of Ed back in 1982? Remember, those were days long before the web, and before one could readily look up people’s backgrounds. So pretty much all I knew was that Ed was connected to MIT, and that he owned the island. And I had the impression that he was some kind of technology magnate (and, yes, the island and the plane helped). But it was all quite mysterious. Ed didn’t engage much in technical conversations. He would make statements that were more like pronouncements—that sounded interesting, but were too vague and general for me to do much more than make up my own interpretations for them. Sometimes I would try to ask for clarification, but the response was usually not an explanation, but instead a tangentially related—though often rather engaging—story.


All these years later, though, one particular exchange stands out in my memory. It was at the end of the conference. We were standing around in the little restaurant on the island, waiting for a boat to arrive. And Ed said out of the blue: “I’ll make a deal with you. You teach me how to write a paper and I’ll teach you how to build a company.” At the time, this struck me as quite odd. After all, writing papers seemed easy to me, and I assumed Ed was doing it if he wanted to. And I’d already successfully started a company the previous year, and didn’t think I particularly needed help with it. (Though, yes, I made plenty of mistakes with that company.) But that one comment from Ed somehow for years cemented my view of him as a business tycoon who didn’t quite “get” science, though had ideas about it and wanted to dabble in it.


Ed and Feynman


Ed would later describe Richard Feynman as his best friend. As we discussed above, they’d first met in 1961, and in 1974 Ed had spent the year at Caltech visiting Feynman, having, as Ed tells it, made a deal (analogous to the one he later proposed to me) that he would teach Feynman about computers, and Feynman would teach him about physics. I myself first got to know Feynman in 1978, and interacted extensively with him not only about physics, but also about symbolic computing—and cellular automata. And in retrospect I have to say I’m quite surprised that he mentioned Ed to me only a few times in passing, and never in detail.


But I think the point was that Feynman and Ed were—more than anything else—personal friends. Feynman tended to find “traditional academics” quite dull, and much preferred to hang out with more “unusual” people—like Ed. Quite often the people Feynman hung out with had quite kooky ideas about things, and I think he was always a little embarrassed by this, even though he often seemed to find it fun to indulge and explore those ideas.


Feynman always liked solving problems, and applying himself to different kinds of areas. But I have to say that even I was a little surprised when in writing this piece I was going through the archives of Ed’s papers at MIT, and found the following letter from Feynman to Ed:


Clearly he—like me—viewed Ed as an authority on business. But what on earth was this “cutting machine”, and why was Feynman trying to sell it?


For what it’s worth, the next couple of pages tell the story:


Feynman’s next-door neighbor had a company that made swimwear, and this was a machine for cutting the necessary fabric—and Feynman had helped develop it. And much as Feynman had been prepared to help his neighbor with this, he was also prepared to help Ed with some of his ideas about physics. And in the archive of Ed’s papers, there’s a letter from Feynman:


I don’t know whether this is the first place the term “Fredkin gate” was ever used. But what’s here is a quintessential example of Feynman diving into some new subject, doing detailed calculations (by hand) and getting a useful answer—in this case about what would become Ed’s best-known invention: reversible logic, and the Fredkin gate.


Feynman had always been interested in “computing”. And indeed when he was recruited to the Manhattan Project it was to run a team of human computers (equipped with mechanical desk calculators). I think Feynman always hoped that physics would “become computational” at least in some sense—and he would for example lament to me that Feynman diagrams were such a bad way to compute things. Feynman always liked the methodology of traditional continuous mathematics, but (as I just noticed) even in 1964 he was saying that “I believe that the theory that space is continuous is wrong, because we get these infinities and other difficulties…”. And elsewhere in his 1964 lectures that became The Character of Physical Law Feynman says:


Did Feynman say these things because of his conversations with Ed? I rather doubt it. But as I was writing this piece I learned that Ed thought differently. As he told it:


I never pressed any issue that would sort of give me credit, okay? It’s just my nature. A very weird thing happened toward the end of my time at Caltech. Richard Feynman and I would get into very fierce arguments. . . . I’m trying to convince him of my ideas, that at the bottom is something finite and so on. He suddenly says to me, “You know, I’m sure I had this same idea sometime quite a while ago, but I don’t remember where or how or whether I ever wrote it down.” I said, “I know what you’re talking about. It’s a set of lectures you gave someplace. In those lectures you said perhaps the world is finite.” He just has this little statement in this book. I saw the book on his shelf. I got it out, and he was so happy to see that there. What I didn’t tell him was he gave that lecture years after I’d been haranguing him on this subject. I knew he thought it was his idea, and I left it that way. That was just my nature.


Notwithstanding what he said, I rather suspect he did push the point. And for example when Feynman gave a talk on “Simulating Physics with Computers” at the 1981 MIT Physics of Computation conference that Ed co-organized, he was careful to write that:


Ed, by the way, arranged for Feynman to get his first personal computer: a Commodore PET. I don’t think Feynman ended up using it terribly much, though in 1984 he took it with him on a trip to Hawaii where he and his son Carl used it to work out probabilities to try to “crack” the randomness of my rule 30 cellular automaton (needless to say, without success).
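For reference, rule 30 itself is trivial to state: each new cell is the left neighbor Xor’ed with the Or of the center and right cells. It’s the center column of its pattern that resists prediction. A minimal sketch:

```python
# Rule 30: new cell = left XOR (center OR right).
# Start from a single black cell and let the rows grow outward.
def rule30_step(row):
    r = [0, 0] + row + [0, 0]   # pad with white so the pattern can widen
    return [r[i - 1] ^ (r[i] | r[i + 1]) for i in range(1, len(r) - 1)]

def rule30_rows(steps):
    rows = [[1]]
    for _ in range(steps):
        rows.append(rule30_step(rows[-1]))
    return rows
```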


Digital Physics & Reversible Logic


Back at MIT in 1975 after his year at Caltech, Ed was no longer the director of Project MAC, but was still on the books as a professor, albeit something of an outcast one. Soon, though, he was teaching a class about his ideas—under the title of “Digital Physics”:


Cellular automata weren’t specifically mentioned in the course description—though in the syllabus they were there, with the Game of Life as a key example:


Back in the 1960s, cellular automata had been a popular topic in theoretical computer science. But by the mid-1970s the emphasis of the field had switched to things like computational complexity theory—and, as Ed told me many times, his efforts to interest people at MIT in cellular automata failed, with influential CS professor Albert Meyer (whose advisor Patrick Fischer had worked quite extensively on cellular automata) apparently telling Ed that “one can tell someone is out of it if they don’t think cellular automata are dead”. (It’s an amusing irony that around this time, Meyer’s future wife Irene Greif would point John Moussouris—who we’ll meet later—to Ed and his work on cellular automata.)


Ed’s ideas about physics were not well received by the physicists at MIT. And for example when students from Ed’s class asked the well-known MIT physics professor Philip Morrison what he thought of Ed’s approach, he apparently responded that “Of course Fredkin thinks the universe is a computer—he’s a computer person; if instead he were a cheese merchant he’d think it was a big cheese!”


When Ed was at Caltech in 1974 a big focus there—led by Carver Mead—was VLSI design. And this led to increasing interest in the ultimate limits on computation imposed by physics. Ever since von Neumann in the 1950s it had been assumed that every step in a computation would necessarily require dissipation of energy—and this was something Carver Mead took as a given. But if this was true, how could Ed’s cellular automaton for the universe work? Somehow, Ed reasoned, it—and any computation, for that matter—had to be able to run reversibly, without dissipating any energy. And this is what led Ed to his most notable scientific contribution: the idea of reversible logic.


Ordinary logic operations—like And and Or—take two bits of input and give one bit of output. And this means they can’t be reversible: there isn’t enough information in a single output bit to uniquely determine the two bits of input. But if—like Ed—you consider a generalized logic operation that for example has both two inputs and two outputs, then this can be invertible, i.e. reversible.
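The Fredkin gate is the canonical example: a controlled swap with three inputs and three outputs. This sketch checks its key properties directly:

```python
from itertools import product

# The Fredkin gate (controlled swap): if the control bit c is 1,
# the other two bits are exchanged; otherwise everything passes through.
def fredkin(c, a, b):
    return (c, b, a) if c else (c, a, b)

# It permutes the 8 possible inputs, so it is reversible...
assert len({fredkin(*bits) for bits in product((0, 1), repeat=3)}) == 8

# ...it conserves the number of 1 bits ("conservative logic")...
assert all(sum(fredkin(*bits)) == sum(bits)
           for bits in product((0, 1), repeat=3))

# ...and with a constant input it computes ordinary logic:
# feeding (x, y, 0) makes the third output x AND y.
assert all(fredkin(x, y, 0)[2] == (x & y) for x in (0, 1) for y in (0, 1))
```

It’s also its own inverse: applying the gate twice gives back the original three bits.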


The concept of an invertible mapping had long existed in mathematics, and under the name “automorphisms of the shift” had even been studied back in the 1950s for the case of what amounted to 1D cellular automata (for applications in cryptography). And in 1973 Charles Bennett had shown that one could make a reversible analog of a Turing machine. But what Ed realized is that it’s possible to make something like a typical computer design—and have it be reversible, by building it out of reversible logic elements.


Looking through the archive of Ed’s papers at MIT, I found what seem to be notes on the beginning of this idea:


And I also found this—which I immediately recognized as a sorting network, in which values get sorted through a sequence of binary comparisons:


Sorting networks are inevitably reversible. And this particular sorting network I recognized as the largest guaranteed-optimal sorting network that’s known—discovered by Milton Green at SRI (then “Stanford Research Institute”) in 1969. It’s implausible that Ed independently discovered this exact same network, but it’s interesting that he was drawing it (by hand) on a piece of paper.


Ed’s archives also contain a 3-page draft entitled “Conservative Logic”:


Ed explains that he is limiting himself to gates that implement permutations


and then goes on to construct a “symmetric-majority-parity” gate—which he claims is “computation universal”:


It’s not quite a Fredkin gate, but it’s close. And, by the way, it’s worth pointing out that these gates alone aren’t “computation universal” in something like the Turing sense. Rather, the point is that—like with Nand for ordinary logic—any reversible logic operation (i.e. permutation) with any number of inputs can be constructed using just these gates, connected by wires.


Ed didn’t at first publish anything about his reversible logic idea, though he talked about it in his class, and in 1978 there were already students writing term papers about it. But then in 1978, as Ed told it later:


I found this guy Tommaso Toffoli. He had written a paper that showed how you could build a reversible computer by storing everything that an ordinary computer would have to forget. I had figured out how to have a reversible computer that didn’t store anything because all the fundamental activity was reversible. Okay? So I decided to hire him because he was the only person who tried to do it and he didn’t succeed, really, and I had—and I hired him to help me.


Toffoli had done a first PhD in Italy building electronics for cosmic ray detectors, and in 1978 he’d just finished a second PhD, working on 2D cellular automata with Art Burks (who had coined the name “cellular automaton”). Ed brought Toffoli to MIT under a grant to build a cellular automaton machine—leading to the machine I saw on Ed’s island in 1982. But Ed also worked with Toffoli to write a paper about conservative logic—which finally appeared in 1982, and contained both the Fredkin gate, and the Toffoli gate. (Ed later griped to me that Toffoli “really hadn’t done much” for the paper—and that after all the Toffoli gate was just a special case of the Fredkin gate.)


Back in 1980—on the way to this paper—Ed, with Feynman’s encouragement, had had another idea: to imagine implementing reversible logic not just abstractly, but through an explicit physical process, namely collisions between elastic billiard balls. And as we saw above, Feynman quickly got into analyzing this, for example seeing how a Fredkin gate could be implemented just with billiard balls.


But ultimately Ed wanted to implement reversibility not just for things like circuits, but also—imitating the reversibility that he believed was fundamental to physics—for cellular automata. Now the fact is that reversibility for cellular automata had actually been quite well studied since the 1950s. But I don’t think Ed knew that—and so he invented his own way to “get reversibility” in cellular automata.


It came from something Ed had seen on the PDP-1 back in 1961. As Ed tells it, in playing around with the PDP-1 he had come up with a piece of code that surprised him by drawing something close to a circle in pixels on the screen. Minsky had apparently “gone into the debugger” to see how it worked—and in 1972 HAKMEM attributed the algorithm to Minsky (though in the Pascal program I got from Ed in 1982, it appears as a function called efpattern()). Here’s a version of the algorithm:
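Here’s a Python transcription of the integer recurrence (a reconstruction; the original was PDP-1 machine code, with the divisor d playing the role of a small “epsilon”):

```python
# Integer circle-drawing recurrence, reconstructed.
def circle_points(x, y, d, steps):
    pts = []
    for _ in range(steps):
        x = x - y // d          # uses the old y...
        y = y + x // d          # ...but the NEW x, which is what makes
        pts.append((x, y))      # each step exactly invertible
    return pts

# Undo the steps: the last-written variable (y) is undone first.
def circle_back(x, y, d, steps):
    for _ in range(steps):
        y = y - x // d
        x = x + y // d
    return (x, y)
```

Because each update adds or subtracts a quantity computed entirely from the other, untouched variable, every step can be undone exactly, and that integer invertibility is precisely the reversibility Ed noticed.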


And, yes, with different divisors d it can give rather different (and sometimes wild) results:


But for our purposes here what’s important is that Ed found out that this algorithm is reversible—and he realized that in some sense the reason is that it’s based on a second-order recurrence. And, once again, the basic ideas here are well known in math (cf. reversibility of the wave equation, which is second order). But Ed had a more computational version: a second-order cellular automaton in which one adds mod 2 the value of a cell two steps back. And I think in 1982 Ed was already talking about this “mod-2 trick”—and perhaps the PERQ program was intended to implement it (though it didn’t).


Ed’s work on reversible logic and “digital physics” in a sense came to a climax with the 1981 Physics of Computation conference at MIT—that brought in quite a Who’s Who of people who’d been interested in related topics (as I mentioned above, I wasn’t there because of a clash with the release of SMP Version 1.0, though I did meet or at least correspond with most of the attendees at one time or another):


Originally Ed wanted to call the conference “Physics and Computation”. But Feynman objected, and the conference was renamed. In the end, though, Feynman gave a talk entitled “Simulating Physics with Computers”—which most notably talked about the relation between quantum mechanics and computation, and is often seen as a key impetus for the development of quantum computing. (As a small footnote to history, I worked with Feynman quite a bit on the possibility of both quantum computing and quantum randomness generation, and I think we were both convinced that the process of measurement was ultimately going to get in the way—something that with our Physics Project we are finally now beginning to be able to analyze in much more detail.)


But despite his interactions with Feynman, Ed was never too much into the usual ideas of quantum mechanics, hoping (as he said in the flyer for his course on digital physics) that perhaps quantum mechanics would somehow fall out of a classical cellular-automaton-based universe. But when quantum computing finally became popular in the 1990s, reversible logic was a necessary feature, and the Fredkin gate (also known as CSWAP or “controlled-swap”) became famous. (The Toffoli gate—or CCNOT—is a bit more famous, though.)


In tracing the development of Ed’s ideas, particularly about “digital physics”, there’s another event worthy of mention. In late 1969 Ed learned about an older German tech entrepreneur named Konrad Zuse who’d published an article in 1967 (and a book in 1969) on Rechnender Raum (Calculating Space)—mentioning the term “cellular automata”:


Although Zuse was 24 years older than Ed, there were definitely similarities between them. Zuse had been very early to computers, apparently building one during World War II that suffered an air raid (and may yet still lie buried in Berlin). After the war, Zuse started a series of computer companies—and had ideas about many things. He’d been trained as an engineer, and perhaps it was having worked on solving his share of PDEs using finite differences that led him to the idea—a bit like Ed’s—that space might fundamentally be a discrete grid. But unlike Ed, Zuse for the most part seemed to think that—as with finite differences—the values on the grid should be continuous, or at least integers. Ed arranged for Zuse’s book to be translated into English, and for Zuse to visit MIT. I don’t know how much influence Zuse had on Ed, and when Ed talked to me about Zuse it was mostly just to say that people had treated his ideas—like Ed’s—as rather kooky. (I exchanged letters with Zuse in the 1980s and 1990s; he seemed to find my work on cellular automata interesting.)


Ideas & Inventions Galore


It wasn’t just physics that Ed had ideas about. It was lots of other things too. Sometimes the ideas would turn into businesses; more often they’d just stay as ideas. Ed’s archive, for example, contains a document on the “Intermon Idea” that Ed hoped would “provide a permanent solution to the world’s problem of not having a stable medium of exchange”:


And, no, Ed wasn’t Satoshi Nakamoto—though he did tell me several times that (although, to his displeasure, it was never acknowledged) he had suggested to Ron Rivest (the “R” of RSA cryptography) the idea of “using factoring as a trapdoor”. And—not content with solving the financial problems of the world, or, for that matter, fundamental physics—Ed also had his “algorithmic plan” to prevent the possibility of World War III.


And then there was the Muse. Marvin Minsky had long been involved with music, and had assembled out of electronic modules a system that generated sequences of musical notes. But in 1970 Ed and Minsky developed what they called the Muse—whose idea was to be a streamlined system that would use integrated circuits to “automatically compose music”:


In actuality, the Muse produced sequences of notes determined by a linear feedback shift register—in essence a 1D additive cellular automaton—in which the details of the rule were set on its front panel as “themes”. The results were interesting—if rather R2-D2-like—but weren’t what people usually thought of as “music”. Ed and Minsky started a company named Triadex (note the triangular shape of the Muse), and manufactured a few hundred Muses. But the venture was not a commercial success.
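A linear feedback shift register of the kind inside the Muse is easy to sketch (the width and tap positions here are illustrative, not the Muse’s actual wiring):

```python
# Fibonacci LFSR: shift left each step, feeding back the XOR of the
# tapped bits; the emitted low bits would drive the note selection.
def lfsr_bits(width=4, taps=(3, 2), seed=1, steps=30):
    state, out = seed, []
    mask = (1 << width) - 1
    for _ in range(steps):
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & mask
    return out
```

With these (maximal-length) taps the 4-bit register cycles through all 15 nonzero states, so the bit stream repeats with period 15; different “themes” on the Muse’s front panel amounted to different feedback settings.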


Particularly through interacting with Minsky, Ed was quite involved in “things that should be possible with AI”. The Muse had been about music. But Ed also for example thought about chess—where he wanted to build an array of circuits that could tree out possible moves. Working with Richard Greenblatt (who had developed an earlier chess machine) my longtime friend John Moussouris ended up designing CHEOPS (a “Chess-Oriented Processing System”) while Ed was away at Caltech. (Soon thereafter, curiously enough, Moussouris would go to Oxford and work with Roger Penrose on discrete spacetime—in the form of spin networks. Then in later years he would found two important Silicon Valley microprocessor companies.)


Keeping on the chess theme, Ed would in 1980 (through his Fredkin Foundation) put up the Fredkin Prize for the first computer to beat a world champion at chess. The first “pre-prize” of $5k was awarded in 1981; the second pre-prize of $10k in 1988—and the grand prize of $100k was awarded in 1997 with some fanfare to the IBM Deep Blue team.


Ed also put up a prize for “math AI”, or, more specifically, automated theorem proving. It was administered through the American Math Society and a few “milestone prizes” were given out. But the grand Leibniz Prize “for the proof of a ‘substantial’ theorem in which the computer played a major role” was never claimed, the assets of the Fredkin Foundation withered, and the prize was withdrawn. (I wonder if some of the things done in the 1980s and 1990s by users of Mathematica should have qualified—but Ed and I never made this connection, and it’s too late now.)


Ed the Consultant


Particularly during his time at MIT, Ed did a fair amount of strategy consulting for tech companies—and Ed would tell me many stories about this, particularly related to IBM and DEC (which were in the 1980s the world’s two largest computer companies).


One story (whose accuracy I’ve never been able to determine) related to DEC’s ultimately disastrous decision not to enter the personal computer business. As Ed tells it, a team at DEC did a focus group about PCs—with Ken Olsen (CEO of DEC) watching. There was a young teacher in the group who was particularly enthusiastic. And Olsen seemed to be getting convinced that, yes, PCs were a good idea. As the focus group was concluding, the teacher listed off all sorts of ways PCs could change the world. But then, fatefully, he added right at the end: “And I don’t just mean here on Earth”. Ed claims this was the moment when Olsen decided to kill the PC project at DEC.


Ed told a story from the early 1970s about a giant IBM project called FS (for “Future Systems”):


IBM has this project. They’re going to completely revolutionize everything. The project is to design everything from the smallest computer to the new largest. They’re all to be multiprocessors. The specs were just fantastic. They promised to guarantee their customers 100% uptime. Their plans were, for instance, when you have a new OS, it’s updated. They guarantee 24-hour operation at all times. They plan to be able to update the OS without stopping this process. Things like that, a lot of goals that are very lofty, and so on.


Someone at IBM whom I knew very well, a very senior guy, came to me one day and said, “Look, these guys are in trouble, and maybe MIT could help them.” I organized something. Just under 30 professors of computer science came down to IBM. We got there on Sunday night and starting Monday morning, we got one lecture an hour, eight on Monday, Tuesday, Wednesday, Thursday, and four on Friday, describing the system. It was just spectacular, everything they were trying to do, but it was full of all kinds of idiocy. They were designing things that they’d never used. This whole thing was to be oriented about people looking at displays.


No one at IBM had done anything like that. They think, “Okay, you should have a computer display,” and they came up with certain problems that hadn’t occurred to the rest of us. If you’re looking at the display, how can you tell the difference between what you had put into the computer and what the computer had put in? This worried them. They came up with a hardware fix. When you typed, it always went on the right half of the screen; when the computer did something, it always went on the left half, or I may have it backwards, but that was the hardware.


What happened is I came to realize that they were so over their head in their goal that they were going to annihilate themselves with this thing. It was just going to be the world’s greatest fiasco for it. I started cornering people and saying, “Look, do you realize that you’re never going to make this work?” and so on, so forth. This came to the attention of people at IBM, and it annoyed them. I got a call from someone saying, “Look, you’re driving us nuts. We want to hear you out, so we’re going to conduct a debate.” There’s a guy named Bob [Evans], who was the head of the project. What happened was we’re in the boardroom with IBM, lots of officials there, and he and I have a debate.


I’m debating that they have to kill the project and do something else. He’s debating that they shouldn’t kill the project. I made all my points. He made all his points. Then a guy named Mannie Piore, who was the one who thought of the idea of having a research laboratory, a very senior guy, said to me, he said, “Hey, Ed,” he said, “We’ve heard you out.” He says, “This is our company. We can do this product even if you think we shouldn’t.” I said, “Yes, I admit that’s true.” He said, “You presented your case. We’ve heard you out, and we want to do it.” I said, “Okay.” He said, “Can you do us a favor?” I said, “What’s that?” He said, “Can you stop going around talking to people about why it has to be killed?” I said, “Look, I’ve said my piece. I’ve been heard out.” “Yes. Okay.” “I quit.”


I had only one ally in that room; that was John Cocke. As we were walking out of the room, he came over to me and said, “Don’t worry, Ed.” He said, “It’s going to fall over of its own weight.” I’ll never forget that. Ten days later, it was canceled. A lot of people were very mad at me.


I’m not sure what Ed was like as an operational manager of businesses. But he certainly had no shortage of opinions about how businesses should be run, or at least what their strategies should be. He was always keen on “do-the-big-thing” ideas. I remember him telling me multiple times about a company that did airplane navigation. It had put a certain number of radio navigation beacons into its software. Ed told me he’d asked about others, and the company had said “Well, we only put in the beacons lots of people care about”. Ed said “Just put all of them in”. They didn’t. And eventually they were overtaken by a company that did.


Ed the Businessman


Ed’s great business success—and windfall—was III. But Ed was also involved with a couple dozen other companies—almost all of which failed. There’s a certain charm in the diversity of Ed’s companies. There was Three Rivers Computer Corporation, which made the PERQ computer. There was Triadex, which made the Muse. There was a Boston television station. There was an air taxi service. There was Fredkin Enterprises, importing PCs into the Soviet Union. There was Drake’s Anchorage, the resort on his island. There was Gensym, a maker of AI-oriented process control systems, which was a rare success. And then there was Reliable Water.


Ed’s island—like many tropical islands—had trouble getting fresh water. So Ed decided to invent a solution, coming up with a new, more energy-optimized way to do reverse osmosis—with a dash of AI control. Reliable Water announced its product in May 1987, desalinating water taken from Boston Harbor and serving it to journalists to drink. (Ed told me he was a little surprised how willingly they did so.)


Looking at my archives I see I was sufficiently charmed by the picture of Ed posing with his elaborate “intelligent” glass tubing that I kept the article from New Scientist:


As Ed told it to me, Reliable Water was just about to sell a major system to an Arab country when his well-pedigreed CEO somehow cheated him, and the deal fell through.


But what about the television station? How did Ed get involved with that? Apparently in 1969 Jerry Wiesner, then president of MIT, encouraged Ed to support a group of Black investors (led by a certain Bertram Lee) who were challenging the broadcasting license of Boston’s channel 7. Years went by, other suitors showed up, and litigation about the license went all the way to the Supreme Court (which described the previous licensee as having shown an “egregious lack of candor” with the FCC). For a while it seemed like channel 7 might just “go dark”. But in early January 1982 (just a couple of weeks before I first met him) Ed took over as president of New England Television Corporation (NETV)—and in May 1982 NETV took over channel 7, leaving Ed with a foot of acquisition documents in his home library, and a television channel to run:


There’d been hopes of injecting new ideas, and adding innovative educational and other content. But things didn’t go well and it wasn’t long before Ed stepped down from his role.


A major influence on Ed’s business activities came out of something that happened in his personal life. In 1977 Ed had been married for 20 years and had three almost-grown children. But then he met Joyce. On a flight back from the Caribbean he sat next to a certain Joyce Wheatley, who came from a prominent family in the British Virgin Islands and had just graduated with a BS in economics and finance from Bentley College (now Bentley University) in Waltham, MA. As both Ed and Joyce tell it, Ed immediately started offering advice, like that the best way to overcome a fear of flying was to learn to fly (which, much later, Joyce in fact did).


Joyce was starting work at a bank in Boston, but matters with Ed intervened, and in 1980 the two of them were married in the Virgin Islands, with Feynman serving as Ed’s best man (and at the last minute lending Ed a tie for the occasion). In 1981, Ed and Joyce had a son, whom they named Richard after Richard Feynman (though he’s now known as “Rick”)—of whom Ed was very proud.


When Ed died, Joyce and he had been married for 43 years—and Joyce had been Ed’s key business partner all that time. They made many investments together. Sometimes it’d start with a friend or vendor. Sometimes Ed (or Joyce) would meet students or others—who’d be invited over to the house some evening, and leave with a check. Sometimes the investments would be fairly hands-off. Sometimes Ed would get deeply involved, even at times playing CEO (as he did with Three Rivers and NETV).


When the web started to take off, Ed and Joyce created a company called Capital Technologies which did angel investing—and ended up investing in many companies with names like Sourcecraft, SqueePlay, EchoMail, Individual Inc. and Radnet. And—like so many startups of this kind—most failed.


Ed also continued to have all sorts of ideas of his own, some of which turned into patents. And—like so much to do with Ed—they were eclectic. In 1995 (with a couple of other people) there was one based on using evanescent waves (essentially photon tunneling) to more accurately find the distance between the read/write head and the disk in a disk drive or CD-ROM drive. Then in 1999 there was the “Automatic Refueling Station”—using machine vision plus a car database to automate pumping gas into cars:


That was followed in 2003 by a patent about securely controlling telephone switching from web clients. In 2006, there was a patent application named simply “Contract System” about an “algorithmic contract system” in which the requirements of buyers and sellers of basically anything would be matched up in a kind of tiling-oriented geometrical way:


In 2011 there was “Traffic Negotiation System”, in which cars would have rather-airplane-like displays installed that would get them in effect to “drive in formation” to avoid traffic jams:


Ed’s last patent was filed in 2015, and was essentially for a scheme to cache large chunks of the web locally on a user’s computer—a kind of local CDN.


But all these patents represented only a small part of Ed’s “idea output”. And for example Ed told me many other tech ideas he had—a few of which I’ll mention later.


And Ed’s business activities weren’t limited to tech. He did his share of real-estate transactions too. And then there was his island. For years Joyce and Ed continued to operate Drake’s Anchorage, and tried to improve the infrastructure of the island—with Ed, as Joyce tells it, more often to be found helping to fix the generator on the island than partaking of its beaches.


Back in 1978 Ed had acquired a “neighbor” when Richard Branson bought Necker Island, which was a couple of miles further out towards the Atlantic than Moskito Island. Ed told me quite a few stories about Branson, and said that for years Branson had wanted to buy his island. Ed hadn’t been interested in selling, but eventually agreed to give Branson right of first refusal. Then in 2007 a Czech (or was it a Russian?) showed up and offered to buy the island for cash “to be delivered in a suitcase”. It was all rather sketchy, but Ed and Joyce decided it was finally time to sell, and let Branson exercise his right of first refusal, and buy the island for about $10M.


Ed and His Toys


Ed liked to buy things. Computers. Cars. Planes. Boats. Oh, and extra houses too (Vermont, Martha’s Vineyard, Portola Valley, …)—as well as his island. Ed would typically make decisions quickly. A house he drove by. New tech when it first came out. He was always proud of being an early adopter, and he’d often talk almost conspiratorially about the “secret” features he’d figured out in new tech he’d bought.


But I think Ed’s all-time favorite “toys” were planes—and over the course of his life he owned a long sequence of them. Ed was a serious (and, by all reports, exceptionally good) pilot—with an airline transport pilot license (plus seaplane and glider licenses). And I always suspected that his cut-and-dried approach to many things reflected his experience in making decisions as a pilot.


Ed at different times had a variety of kinds of planes, usually registered with the vanity tail number N1EF. There were twin-propeller planes. There were high-performance single-propeller planes. There was the seaplane that I’d “met” in the Caribbean. At one time there was a jet—and in typical fashion Ed got himself certified to fly the jet singlehandedly, without a copilot. Ed had all sorts of stories about flying. About running into Tom Watson (CEO of IBM) who was also a pilot. About getting a new type of plane where he thought he was getting #5 off the production line, but it was actually #1—and one day its engine basically melted down, but Ed was still able to land it.


Ed also had gliders, and competed in gliding competitions. Several times he told me a story—as a kind of allegory—about another pilot in a gliding competition. Gliders are usually transported with their wings removed, and the wings are reattached in order to fly. Apparently there was an extra locking pin which the other pilot decided to remove to save weight, because it didn’t seem necessary. But when the glider was flying in the competition its wings fell off. (The pilot had a parachute, but landed embarrassed.) The very pilot-oriented moral as far as Ed was concerned: just because you don’t understand why something is there, don’t assume it’s not necessary.


Ed and the Soviet Union


One of the topics about which Ed often told “you-can’t-make-this-stuff-up” stories was the Soviet Union. Ed’s friend John McCarthy had parents who were active communists, had learned Russian, and regularly took trips to the Soviet Union. And as Ed tells it McCarthy came to Ed one day and said (perhaps as a result of having gotten involved with a Russian woman) “I’m moving to the Soviet Union”, and talked about how he was planning to dramatically renounce his US citizenship. McCarthy began to make arrangements. Ed tried to talk him out of it. And then it was 1968 and the Soviets send their tanks into Czechoslovakia—and McCarthy is incensed, and according to Ed, sends a telegram to a very senior person in the Soviet Union saying “If you invade Czechoslovakia then I’m not coming”. Needless to say, the Soviets ignored him. Ed told me he’d said at the time: “If the Russians were really smart and really understood things, and they had to choose between John McCarthy and Czechoslovakia, they should have chosen John McCarthy.” (McCarthy would later “flip” and become a staunch conservative.)


Perhaps through McCarthy, Ed started visiting the Soviet Union. He didn’t like the tourist arrangements (required to be through the government’s Intourist organization)—and decided to try to do something about it, sending a survey to Americans who’d visited the Soviet Union:


A year later, Ed was back in the Soviet Union, attending a somewhat all-star conference (along with McCarthy) on AI—with a rather modern-sounding collection of topics:


Here’s a photograph of a bearded Ed in action there—with a very Soviet simultaneous translation booth behind him:


Ed used to tell a story about Soviet computers that probably came from that visit. The Soviet Union had made a copy of an IBM mainframe computer—labeling it as a “RYAD” computer. There was a big demo—and the computer didn’t work. The generals in charge asked “Well, did you copy everything?” As it turned out, there was active circuitry in the “IBM” logo—and that needed to be copied too. Or at least that’s what Ed told me.


But Ed’s most significant interaction with the Soviet Union came in the early 1980s. The US had in place its CoCom list that embargoed export of things like personal computers to the Soviet Union. Meanwhile, within the Soviet Union, photocopiers were strictly controlled—to prevent non-state-sanctioned flow of information. But as Ed tells it, he hatched a plan and sold it to the Reagan administration, telling them: “You’re on the wrong track. If we can get personal computers into the Soviet Union, it breaks their lock on the flow of information.” But the problem was he had to convince the Soviets they wanted personal computers.


In 1984 Ed was in Moscow—supposedly tagging along to a physics conference with an MIT physicist named Roman Jackiw. He “dropped in” at the Computation Center of the Academy of Sciences (which, secretly, was a supplier to the KGB of things like speech recognition tech). And there he was told to talk to a certain Evgeny Velikhov, a nuclear physicist who’d just been elected vice president of the Academy of Sciences. Velikhov arranged for Ed to give a talk at the Kremlin to pitch the importance of computers, which apparently he successfully did, after convincing the audience that his motivation was to make the world a safer place by balancing the technical capabilities of East and West.


And as if to back up this point, while he was in the Soviet Union, Ed wrote a 5-page piece from “A Concerned Citizen, Planet Earth” addressed “To whom it may concern” in Moscow and Washington—ending with the suggestion that its plan might be discussed at an upcoming meeting between Andrei Gromyko and Ronald Reagan at the UN:


The piece mentions another issue: the fate of prominent, but by then dissident, Soviet physicist Andrei Sakharov, who was in internal exile and reportedly on hunger strike. Ed hatched a kind of PCs-for-Sakharov plan in which the Soviets would get PCs if they freed Sakharov.


Meanwhile, in true arms-dealer-like fashion, he’d established Fredkin Enterprises, S.A. which planned to export PCs to the Soviet Union. He had his student Norm Margolus spend a summer analyzing the CoCom regulations to see what characteristics PCs needed to have to avoid embargo.


In the Reagan Presidential Library there’s now a fairly extensive file entitled “Fredkin Computer Exports to USSR”—which for example contains a memo reporting a call made on August 25, 1984, by then-vice-president George H. W. Bush to Sakharov’s stepdaughter, who was by that time living in Massachusetts (and, yes, Ed was described as a “PhD in computer science” with a “flourishing computer business”):


Soon the White House is communicating with the US embassy in Moscow to get a message to Ed:


And things are quickly starting to sound as if they were from a Cold War spy drama (there’s no evidence Ed was ever officially involved with the US intelligence services, though):


I don’t think Ed ever ended up talking to Sakharov, but on November 6, 1984, Fredkin Enterprises was sent a letter by Velikhov ordering 100 PCs for the Academy of Sciences, and saying they hoped to order 10,000 more. But the US was not as speedy, and in 1985 there was still back and forth about CoCom issues. Ed of course had a plan:


And indeed in the end Ed did succeed in shipping at least some computers to the Soviet Union, adding a hack to support Cyrillic characters. Ed often took his family with him to Moscow, and he told me that his son Rick created quite a stir when at age 6 he was seen there playing a game on a computer. Up to then, computers had always been viewed as expensive tools for adults. But after Rick’s example there were suddenly all sorts of academicians’ kids using computers.


(In the small world that it is, one person Ed got to know in the Academy of Sciences was a certain Arkady Borkovsky—who in 1989 would leave Russia to come work at our company, and who would later co-found Yandex.)


By the way, to fill in a little color of the time, I might relate a story of my own. In 1987 I went to a (rather Soviet) conference in Moscow on “Logic, Methodology and Philosophy of Science.” Like everyone, I was assigned a “guide”. Mine continually tried to pump me for information about the American computer industry. Eventually I just said: “So what do you actually want to know?” He said: “We’ve cloned the Intel 8086 microprocessor, and we want to know if it’s worth cloning the Motorola 68000. Motorola has put a layer of epoxy that makes it hard to reverse engineer.” He assumed that the epoxy was at the request of the US government, to defeat Soviet efforts—and he didn’t believe me when I said I thought it was much more likely there to defeat Intel.


Ed told me another story about his interactions with Soviet computer efforts after Gorbachev came to power:


Before the days of integrated circuits the way IBM and Digital built computers was they put the whole computer together, and then it would sit for six weeks in “system integration” while they made the pieces work together and slowly got the bugs out.


The Russians built computers differently because that seemed logical to them. They’d send all the components down there and then some guy was supposed to plug them together, and they were supposed to work. But they didn’t. With these big computers, they never made any of them work.


The Academy of Sciences had one. And one time I went to see their big computer, so they unlock the doors to this dusty room where the computer is, where it’s not being used because it doesn’t work, and all this information is being kept secret, not from the United States, but from the leadership. When I discovered all this I documented it … and I wrote a 40-page document that explained it.


I was making trips with Rick often and Mike [his older son] very often. On one trip when I arrived, they tell me, “Oh, you have to come to this meeting.”


I don’t speak Russian. I never knew it. I’m seated at this meeting, and there’s a Russian friend of mine [head of the Soviet Space Research Institute] next to me. We’re just sitting there, and things are going on. I still don’t know what that meeting was, but I had this 40-page document. I gave it to my friend. He starts reading. He says, “Oh, this is so interesting.” It got to be about ten o’clock at night and they said, “Everyone come back in the morning. Nine o’clock.”


My friend said, “Can I borrow this [document]? I’ll bring it back in the morning.” I said, “Sure, go ahead.” He comes back next morning. He says to me, “I have good news, and I have bad news.” I said, “What’s the good news?” He says, “Your document has been translated into Russian.” I said, “You left here with a 40-page typewritten document. I don’t believe you.” He said, “Well, my institute recently took on the task of translating Scientific American into Russian.


“When I left here, I went to my institute, called in the translators, and they all came in. We divided the document up between them, and it’s all been translated into Russian.”


The document was the analysis of the RYAD situation with the recommendation that the only thing they could do was to cancel it all.


I said, “Okay, what’s the bad news?” He says, “The bad news is it’s classified secret.” When you made a copy or did something, you had to have a government person look at it. They classified it. I said to him, “You can’t classify my documents.” He said, “Of course not. We haven’t. It’s just the Russian one that’s secret.”


Then maybe a week later, he said, “Gorbachev’s read your document.” He canceled it. RYAD. Some people I know were looking to kill me.


In Moscow, there’s a building that’s so unusual. It’s on a highway leading into the city. It’s about five stories high. It’s about a kilometer long, okay? It’s a giant building. I was in it a few years ago, and it’s just a beehive of startups, almost all software startups. That was the RYAD Software Center, okay? 100,000 people got put out of work.


Ed Becomes a Physics Professor


When I first met Ed in 1982, he was in principle a professor at MIT. But he was also CEOing a computer company (Three Rivers), and, though I didn’t know it at the time, had just become president of a television channel. Not to mention a host of other assorted business activities. MIT had a policy that professors could do other things “one day a week”. But Ed was doing other things a lot more than that. Ed used to say he was “tricked” out of his tenured professorship. Because in 1986 he was convinced that with all the other things he had going on, he should become an adjunct professor. But apparently he didn’t realize that tenure doesn’t apply to adjunct professors. And, as Ed told it, the people in the department considered him something of a kook, and without tenure forcing them to keep him, were keen to eject him.


Minsky’s neighbor in Brookline, MA, was a certain Larry Sulak—the very energetic chairman of the physics department at Boston University (and someone I have known since the 1970s). Ed knew Sulak and when Ed was ejected from MIT, Sulak seized the opportunity to bring Ed in as a physics professor at Boston University. Sulak asked me to write a letter about Ed (and, yes, particularly after the research for this piece, there are some things I would change today):

Subject: Re: Ed Fredkin
Date: Aug 24, 1988
From: Stephen Wolfram
To: Larry Sulak

Dear Larry:

In this century, people like Ed Fredkin have been very rare. Ed Fredkin is a gentleman scientist. He has made several fortunes in business, yet he chooses to spend much of his time thinking about science.

The main thing he thinks about is what ideas from computing can tell us about physics. This is an area that I believe has fundamental importance for physics. There are many issues about the behaviour of complex physical systems where the best hope for analysis and understanding comes from computational ideas. There are also many traditional problems in quantum physics and other fundamental areas that I suspect are most likely to be solved by thinking about things from a computational point of view.

Ed Fredkin has had some very good ideas about physics and its relation to computation. Probably the single most important was his independent discovery of the possibility of thermodynamically reversible computation. von Neumann got this wrong — by thinking about things from a computational point of view, Fredkin got it right.

Fredkin has been convinced for many years that cellular automata — basically computational models — could describe fundamental physical processes. As you know, I have worked on using cellular automata to model various specific physical processes. Fredkin is trying to do something grander — he wants to show that all of physics can be reproduced by a cellular automaton. If he is right the discovery would be one whose importance could be compared to the discovery of quantization. Of course, what he is trying to show may not be true, but that is a risk that any new fundamental idea in physics faces.

Ed Fredkin’s style is not typical of scientists. He is more used to addressing boards of directors than lecture audiences. He learned the kind of physics that is in the Feynman lectures by spending time with Dick Feynman rather than reading his books. To some standard scientists, Fredkin at first seems like a nut. To be sure, some of his ideas are pretty nutty. But if you listen and think about it, there is much substance to what Fredkin has to say.

I gather that Fredkin has decided to spend some time around “ordinary physicists”, to try and work out how his ideas fit in with current physical thinking. I believe you are very lucky that Fredkin wants to do this in your department.

Best wishes,
Stephen

And so it was that Ed became a research professor of physics at Boston University (BU). At MIT he’d gotten a DARPA grant that supported Tom Toffoli and Ed’s only “physics PhD student” Norm Margolus in building ever-larger “cellular automaton machines”. And when Ed moved to BU, this effort moved with him, leaving in effect “no trace of Ed” at MIT.
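The “thermodynamically reversible computation” mentioned in the letter above can be illustrated with the controlled-swap gate now universally known as the Fredkin gate. The code below is my own illustrative sketch, not something from Ed’s papers:

```python
# Illustrative sketch: the Fredkin (controlled-swap) gate, the classic
# primitive of reversible computing. It maps 3 bits to 3 bits, is its
# own inverse, and conserves the number of 1 bits.

def fredkin(c, a, b):
    """If the control bit c is 1, swap a and b; otherwise pass all through."""
    return (c, b, a) if c == 1 else (c, a, b)

all_inputs = [(c, a, b) for c in (0, 1) for a in (0, 1) for b in (0, 1)]
for bits in all_inputs:
    # Reversibility: applying the gate twice restores the input
    assert fredkin(*fredkin(*bits)) == bits
    # Conservation: the number of 1 bits never changes
    assert sum(fredkin(*bits)) == sum(bits)

# AND falls out reversibly: the third output of fredkin(x, y, 0) is x AND y
assert fredkin(1, 1, 0)[2] == 1
assert fredkin(1, 0, 0)[2] == 0
```

Because the gate is invertible and the inputs can always be recovered from the outputs, no information is erased, which is what allows computation built from such gates to dissipate arbitrarily little energy in principle.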


When Ed arrived at BU he found he was assigned to an office with a certain Gerard ’t Hooft—who happens to be one of the more creative and productive theoretical physicists of the past half-century (and would win a Nobel Prize in 1999 for his efforts). Ed became friends with ’t Hooft, inviting him and his family to spend time on his island, and later on the boat that Ed bought in the south of France. Feynman died in 1988, and Ed would tell me that he thought he’d “traded” one great physicist for another. (Feynman had suggested Ed try Sidney Coleman, but Coleman wasn’t into it.)


Like Feynman, I think ’t Hooft felt a little uneasy with Ed’s statements about physics. But in 2016 ’t Hooft ended up publishing a book entitled The Cellular Automaton Interpretation of Quantum Mechanics. I thought it was a nice recognition of ’t Hooft’s friendship with Ed. But Ed told me in no uncertain terms that he thought ’t Hooft hadn’t given him the credit he was due—though in reality I don’t think what ’t Hooft did was much related to Ed’s actual work and ideas. (And, by the way, it’s not directly related to my efforts either, though conceivably looking at “generational states” in our Physics Project may give something at least somewhat analogous.)


In 1994 Ed’s direct affiliation with BU ended—though he remained on good terms with the department, and after I moved to the Boston area in 2002 I would often see him at an annual dinner the BU physics department put on for “Boston-area physics people”.

\n

In 1998 Ed would summarize himself like this:

\n
\n

Ed Fredkin has worked with a number of companies in the computer field and has held academic positions at a number of universities. He is a computer programmer, a pilot, advisor to businesses and an amatuer [sic] physicist. His main interests concern digital computer like models of basic processes in physics.

\n
\n

For a while, Ed didn’t have a “university affiliation” (except, through Minsky, as a visitor at the MIT Media Lab), but in 2003—through his friend Raj Reddy—he became a professor (now of computer science) at Carnegie Mellon University, for a while spending time at their West Coast outpost, but mostly just making occasional trips in his plane to Pittsburgh.

\n

Forty Years of Interactions with Ed

\n

For a few years after I first met Ed in 1982, I’d see him fairly regularly. In 1983 I invited him to the first “modern” conference on cellular automata, which I co-organized at Los Alamos. I visited his house in Brookline, MA, a few times. I saw him at the Aspen Center for Physics, and at other places around the world. He was always fun and lively—and told great stories about all sorts of things. He gave the impression that he was mostly spending his time doing big things in business, and that science was an avocation for him. Sometimes he would talk about cellular automata—though I now realize that what he said was either very general and philosophical (leaving me to interpret things in my own way), or very specific to particular rules he’d engineered.

\n

It was always a bit uncomfortable when it came to physics, because the things Ed was saying always seemed to me pretty naive. Quite often I would challenge them—and frustratedly tell Ed that he should learn twentieth-century physics. But Ed would glide over it—and be off telling some other (engaging) story, or some such.

\n

In 1986 I co-organized (with Tom Toffoli and Charles Bennett) a conference called Cellular Automata ’86—at MIT. Ed didn’t come—and I think I had the impression that he’d rather lost interest in cellular automata by that time. I myself went off to start my Center for Complex Systems Research, and then to found Wolfram Research and start the development of Mathematica. Mathematica was released on June 23, 1988—and our records (yes, we’ve kept them!) show that Ed registered his first copy on December 14, 1988. In March 1991 I did a lecture tour about Mathematica 2.0, and saw Ed one last time before diving into work on my book A New Kind of Science—which led me for more than a decade to become an almost complete scientific hermit.

\n

I saw Ed (now 62 years old) when I briefly “came up for air” in connection with the release of Mathematica 3.0 in 1996, and we continued occasionally to exchange pleasant emails:

\n

\n
\n
\n
Date: Sun, 29 Jun 1997 15:49:41 -0400
\n
From: Ed Fredkin
\n
To: Stephen Wolfram
\n
\n

\n
\n…

\n

[Reporting the birth of my second child]

\n

\n

For many children its worst when they are teenagers. Some glide through
\nthat period of life without hassle. Rick is doing great (at 15) despite
\nhis unorthodox education. He relishes calling his parents dopes, but
\naside from arguments about subjects like how late he should be able to
\nhang out with his buddies, its clear that he doesn’t think we’re dopes.

\n

\n

I promise to read your book as soon as I get it!

\n

\n

Its nice to hear from you. News here is that I am no longer needed at
\nRadnet as they now have a great CEO. I got a new airplane in December.
\nIt’s called a Cessna CitationJet. It can carry 7 people at about 440
\nmph. So far its been a lot of fun. We’ll have to think of an excuse to
\ngo for a ride. We are planning to spend some time at Drake’s Anchorage
\nin July. Its great for kids so if that interests you, let me or Joyce
\nknow.

\n

I have taken as a challenge to architect a computer (that weighs a few
\nkilos) that assumes another 100 years of Moore’s Law (10^15 in cost
\nperformance). There are a lot of unsuspected problems lurking in the
\ndetails, but everyone of them seems to have easy solutions. I have
\ngiven a number of talks (IBM Almaden and Watson labs, Intel, NYU,
\netc.). Interest in reversible computing has picked up since heat
\ndissipation has gotten to be a really hot topic (no pun intended). The
\nnext high end Alpha may dissipate as much as 150 watts. Think of a
\nlight bulb!

\n

I use Mathematica for something almost every week… keep it up!

\n

Best regards,

\n

Ed\n

\n
\n

\n

Although I didn’t see Ed myself for quite a few years, Ed would always write to ask for betas of new versions of Mathematica, and he would sometimes chat with staff from my company at trade shows. I thought it a bit odd in 1999 when I heard that in such an encounter he said that he was the one who had “introduced me to cellular automata”. And, moreover, that he, Feynman and Murray [Gell-Mann] were the people who’d suggested I write SMP—which was particularly bizarre since, among other things, I hadn’t met (or even heard of) Ed until about 3 years later.

\n

Then, out of the blue on September 13, 2000, Ed calls my assistant, and follows up with an email:

\n

\n
\n
\n
Subject: Invitation
\n
Date: Wed, 13 Sep 2000 23:53:09 -0400
\n
From: Ed Fredkin
\n
To: Stephen Wolfram
\n
\n

\n
\nHi,

\n

The primary reason I’m contacting you has to do with a program I’m
\norganizing at Carnegie Mellon (CMU). I wrote a proposal to the NSF, called
\n“The Digital Perspective” and got funded. The idea is to invite a number (8
\nto 10) of guests to come to CMU for a few days, to meet with students and to
\ngive a Distinguished Lecture. The NSF would also like to arrange for the
\nguests to come to Washington D.C. and give the same lecture there.

\n

By “Digital Perspective” I mean looking at aspects of the world as Digital
\nProcesses. As you know, I am most interested in looking at physics this
\nway. I have just started getting commitments from potential participants.
\nGerard ‘t Hooft has agreed to come and a number of other good physicists are
\nthinking about it.

\n

\n

Please consider this to be a formal invitation. Of course, CMU will pay
\nexpenses and an honorarium. If the timing works out, it can probably be
\narranged for many of the students to have read your book before you come.
\nYou might get some good feedback from bright students who have also gained
\nfamiliarity with the thoughts of others who are thinking about the “Digital
\nPerspective”. The seminar will run throughout the 2000-2001 academic year.

\n

If you can make it to CMU, I expect that it will be fun and interesting;
\nboth for you, for me and for many others.

\n

…\n

\n
\n

\n

I responded:

\n

\n
\n
\n
Subject: RE: Invitation
\n
Date: Thu, 14 Sep 2000 06:49:19 -0500
\n
From: Stephen Wolfram
\n
To: Ed Fredkin
\n
\n

\n
Thanks for the invitation, etc.

\n

It sounds like a thing I’d like to do, but I can only consider
\n*anything* after my book is finished.

\n

\n

If my book is done in time for your program, then, yes, I’d like to
\nparticipate (though of course I’d want more details about the actual
\nplans etc. etc.). But if the book isn’t done, then sadly I just can’t.
\nIf the cutoff time is June 2001, I am not extremely hopeful that the
\nbook will be done … but if it’s fall 2001 the probabilities go up
\nsubstantially (though, sadly, they are still not 100%).

\n

\n

And what are you up to these days? Business? Science? Other?

\n

On another topic:
\nIn my book, I’m trying very hard to write accurate history notes about
\nthe things I discuss. And for the notes on the history of cellular
\nautomata I’ve been meaning for ages to ask you some questions…

\n

I’m not sure this is a complete list, but here are a few I’ve been
\ncurious about for a long time that I’d really like to know the answers
\nto…

\n

I know that history is hard … even if it’s about oneself. I consider
\nthat I have a good memory, but it’s often hard for me to keep straight
\nwhat happened when, and why, etc. But anything you can tell me about
\nthese questions … or about other aspects of CA history … I’d be very
\ngrateful for.

\n

1. As far as you know, did you invent the 2D XOR CA rule? (I’m assuming
\nthe answer is “yes”…)

\n

2. In what year did you first simulate this CA? On what computer?
\nWhere?

\n

3. What other CA rules did you study at that time?

\n

4. Do you still have any material from the simulations you did
\n(printouts, tapes, programs, etc.)?

\n

5. When you learn about the “munching squares” display hack? How did it
\nrelate to your work on the XOR CA?

\n

6. What did you know about the work done by Unger etc. on cellular image
\nprocessors? How did this relate to your work?

\n

7. What did you know about von Neumann’s work on cellular automata? How
\ndid it relate to your work?

\n

8. What did you know about Ulam and others’ work at Los Alamos on
\nsimulating cellular automata? How did it relate to your work?

\n

9. Were you aware of work on cryptographic applications of CA-like
\nsystems?

\n

…\n

\n
\n

\n

Ed responded:

\n

\n
\n
\n
Subject: RE: Invitation
\n
Date: Fri, 15 Sep 2000 01:25:23 -0400
\n
From: Ed Fredkin
\n
To: Stephen Wolfram
\n
\n

\n
\nHi,

\n

Here are some answers and some free association type ramblings.

\n

\n

> And what are you up to these days? Business? Science? Other?

\n

I’m winding down on business (I’m into one last e-business project) and like
\nyou, working on a book. My guess is that mine is nowhere as ambitious as
\nyours… It’s just to document my ideas about Digital Mechanics (Physics).
\nIn any case, these ideas have made more progress in the last 2 years than in
\nthe previous 40.

\n

I bought a sailboat which is moored in Antibes, France. I spent most of the
\nsummer there and got more science done than in the prior several years.
\nIt’s absolutely the perfect place and circumstance for me to work on my
\nstuff. Gerard ‘t Hooft (plus wife and daughter) came down and joined us for
\na while. You know (I hope) about his interest in CA’s? I’m going back
\nthere for a few weeks on Tuesday.

\n

Here’s a formal proof that you can, at any time, escape all your normal
\nresponsibilities and concentrate exclusively on one really important thing
\n(hint, hint). The proof is that, at any time, YOU CAN DIE. I don’t mean to
\nbe morbid, but sometimes it makes good sense to consider that proof and
\ntemporarily abandon all but some very important task (or some very exciting
\nor fun thing).\n

\n
\n

\n

Ed continued with a long response to my “history questionnaire”:

\n

\n
\n
\n
> 1. As far as you know, did you invent the 2D XOR CA rule? (I’m assuming
\n> the answer is “yes”…)

\n

Yes, as far as I know I did invent it. Here is what I did. I decided to
\nlook for the simplest possible rule that met certain criteria. I wanted
\nspatial symmetry and a symmetric rule vis-à-vis the states of the cells.
\nThe thought was to find something so simple that its behavior could be
\nunderstood while not so simple as to be totally dull. The first such rule I
\ntried was the XOR rule. I programmed it first on the PDP-1 (1961, at BBN
\nand III) where I could see it on the display, and later I wrote a program
\nfor CTSS using a model 33 teletype as a terminal. My motivation was then,
\nas it is now, to be able to capture more and more properties of physics
\nwithin a Digital model. I found an easy proof as to why patterns reappeared
\nin any number of dimensions. I also found, at the beginning, a formula for
\nthe number of ones as a function of time from a single one as the initial
\nstate. My recollection was that it was something like 2D 2^b(t) where D is
\nthe number of dimensions, t is the time step, and b(t) is the number of bits
\nthat are one in the binary representation of t (the tally function). After
\nI showed all this to Seymour Papert, he generalized the proof re self
\nreplication from XOR (sum mod 2) to sum mod any prime. (Some time around
\n1967)
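The replication Ed mentions is easy to check numerically. Here is a minimal Python sketch, assuming the usual form of the rule, in which each cell becomes the XOR of its four von Neumann neighbors:

```python
import numpy as np

def xor_step(grid):
    """One step of the 2D XOR CA: each cell becomes the XOR (sum mod 2)
    of its four von Neumann neighbors."""
    return (np.roll(grid, 1, 0) ^ np.roll(grid, -1, 0)
            ^ np.roll(grid, 1, 1) ^ np.roll(grid, -1, 1))

# Start from a single 1 in the middle of a 32x32 grid.
grid = np.zeros((32, 32), dtype=np.uint8)
grid[16, 16] = 1

for _ in range(8):   # 8 = 2^3 steps
    grid = xor_step(grid)

# At t = 2^k the seed has "replicated" into 4 copies, displaced by
# 2^k along each axis: all intermediate structure cancels mod 2.
print(int(grid.sum()))   # 4
```

At t = 2^k all the cross terms cancel mod 2, so any initial pattern (not just a single cell) reappears as four shifted copies, which is what Papert’s mod-p generalization extends to sums mod any prime.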

\n

> 2. In what year did you first simulate this CA? On what computer?

\n

Where?

\n

See above.

\n

> 3. What other CA rules did you study at that time?

\n

I found a simple proof that a von Neumann neighborhood CA could exactly
\nemulate any other (such as the 3×3 neighborhood) and used this as a reason
\nto look at nothing else. I explored so many different rules that I probably
\nwould have found the game of Life had I not put blinders on. After I came
\nto MIT (1968), I had 2 things in mind, to find a really simple Universal CA
\n(I call them UCA’s )and to find Reversible, Universal CA’s (RUCA’s)
\nAs you may know, the search for UCA’s went slowly until I had the idea to
\nabandon the Turing Machine model and look at modeling digital logic and
\nwires. Within 15 minutes after this idea occurred to me, I had a 4 state
\nUCA on my blackboard. At that time the best known was in Codd’s thesis; an
\n8 state UCA. I showed this to a student of mine, Roger Banks, who had been
\nstruggling for a few years trying to complete an AI PhD thesis. The next
\nmorning both he and I showed up with 3 state UCA’s. He switched his PhD topic
\nand found a 2 state, von Neumann neighborhood UCA, a thing that Codd
\npurported to have proved impossible.

\n

While at BBN, after seeing all my 2-D CA’s expanding with simple
\nkaleidoscope like symmetries, (like the diamond shapes in the XOR rule),
\nMarvin Minsky challenged me to find a rule (any rule) that showed spherical
\npropagation. I took the challenge and shortly came up with such a rule.

\n

With respect to reversibility, the first satisfactory RUCA was done by
\nNorman Margolus. I shortly thereafter found a simple RUCA that didn’t need
\nthe use of the Margolus Neighborhood trick.

\n

> 4. Do you still have any material from the simulations you did
\n> (printouts, tapes, programs, etc.)?

\n

Yes, Probably, quite a bit

\n

> 5. When you learn about the “munching squares” display hack? How did it
\n> relate to your work on the XOR CA?

\n

I don’t recall it having any effect. It’s very unlikely that I knew of it
\nprior to the XOR CA.

\n

> 6. What did you know about the work done by Unger etc. on cellular image
\n> processors? How did this relate to your work?

\n

I knew of it second hand, but I don’t think it had any effect. Do you know
\nabout Farley and Clark (Wes Clark) and their publication while at MIT’s
\nLincoln Labs in the late 50’s?

\n

> 7. What did you know about von Neumann’s work on cellular automata? How
\n> did it relate to your work?

\n

At the time I did the XOR work I had not read anything about the von Neumann
\nCA, but I was told about it and I understood the concept very well. Many
\nyears later I read something (by Burkes [Burks], I think). I remember knowing that
\nit was a 29 state system and that it knew left from right in order to extend
\nand turn its construction arm.

\n

> 8. What did you know about Ulam and others’ work at Los Alamos on
\n> simulating cellular automata? How did it relate to your work?

\n

All I knew about Ulam and CA is that, like the Hydrogen Bomb, he had key
\nideas but probably didn’t get as much credit as he deserved. All my
\nknowledge re Ulam was anecdotal. As to what he did vs. what von Neumann did
\nI didn’t really know anything.
\nI didn’t know anything about anyone else actually simulating CA’s however
\nI’m pretty sure I assumed that others must have done so. It was so easy and
\nso obvious. While the use of a computer with a display (such as the Lincoln
\nLab TX-0 and TX-2, the Digital PDP-1 and the IBM 709 and 7090 all had or
\ncould have CRT displays, it was easy enough to display simple CA’s with a
\nprinter, even a 10 CPS teletype.

\n

> 9. Were you aware of work on cryptographic applications of CA-like
\n> systems?

\n

I thought I invented that idea! As soon as I found ways to make RUCA’s it
\noccurred to me that they could be used for cryptography. As an aside, when
\nWitt Diffey [Whit Diffie] came up with the idea of public key cryptography,
\nwhich needed a trapdoor function, I thought of using the product of 2 large primes.
\nI had just written the first program, in LISP, to implement Michael Rabin’s first
\nversion of a probabilistic prime test. As soon as I implemented it I
\nstarted a search at 10^100 and discovered that 10^100 +35,737 and 10^100
\n+35,739 were prime. A week later I met Rich Schroeppel in LA (he was
\nworking for my company, III) and knowing a larger prime pair than anyone
\nelse on Earth I told Rich and he was blown away. He was seated at a PDP-10
\nterminal and all he said was an emphatic “Really!” He then went type, type,
\ntype for a few seconds and turned around and said “You’re right!” which blew
\nme away! I asked what he did and he said (while knowing nothing of Rabin’s
\nmethod) “all I did was look at 3^(n-1) mod n, you know, Fermat’s little
\ntheorem, it usually gives 1 for primes.”
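Rabin’s test, as Ed describes using it, can be sketched in a few lines of Python. This is a generic Miller–Rabin reconstruction, not Ed’s original LISP program:

```python
import random

def miller_rabin(n, rounds=20):
    """Rabin's probabilistic primality test: if n is composite, each
    random base exposes it with probability at least 3/4."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False   # a witnesses that n is composite
    return True            # probably prime

print(miller_rabin(91), miller_rabin(121), miller_rabin(97))
```

The same function handles a googol-scale search like Ed’s directly, e.g. `miller_rabin(10**100 + 35737)`, though each call at that size takes appreciably longer.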

\n

I’m rambling, probably about stuff of no interest to you. Anyway, I stopped
\nRon Rivest in the hallway at Tech Square and asked if he had heard of
\nDiffey’s [Diffie’s] stuff. I don’t remember exactly what he said but I know that when
\nI told him that Rabin’s new method to find large primes meant that the
\nproduct of 2 primes was a good trapdoor function he was surprised and
\nthought it was a good idea! I never thought any more about it and hadn’t
\ncome up with the idea of using the phi function… Years later, long after
\nRSA was a big thing Ron reminded me of the event… Don Knuth told me that
\nhe also thought of using the product of 2 primes before RSA, but he couldn’t
\nhave known about Rabin’s method when I did (as Rabin told it to me right
\nwhen he thought it up!)

\n

By the way, I have an interesting algorithm for factoring smaller numbers,
\nsuch as can be done in less than an hour with Mathematica (normal
\nFactorInteger or ECM). I’ve written a few terribly unoptimized Mathematica
\nfunctions that implement the method. For what its good for, my Mathematica
\nfunctions (not compiled or anything) make Mathematica factor in a lot less
\nreal time than Mathematica does with FactorInteger or ECM.

\n

The big news re me and my work is what’s happening right now. Whatever one
\nthinks about my stuff (Digital Mechanics), it’s vastly improved. However
\nit’s still very far from a complete theory. Of course, Digital Mechanics is
\nabout CA’s.

\n

If you have any interest in reversibility, I’ve done lots in that area,
\nranging from RUCAs, conservative logic, and my transforms. The transforms
\nare general methods of converting algorithms that calculate the approximate
\ntime evolution of a system (approximate because of round off, truncation and
\nthe finite delta t) which is approximately reversible (by changing delta t
\nto minus delta t) into an equivalent algorithm that calculates approximately
\nthe same thing going forwards, but which is exactly reversible (being
\ncalculated on a computer with round off and truncation error). I also have
\na lot of methods for making RUCA’s with particular properties.
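A simple instance of the kind of transform Ed describes, offered here as a hypothetical reconstruction rather than his exact scheme, is the second-order trick: keep two successive states and set x(t+1) = g(x(t)) - x(t-1). Whatever roundoff or truncation happens inside g, the step is exactly invertible, since x(t-1) = g(x(t)) - x(t+1):

```python
def g(x):
    # any lossy, rounded update -- here a deliberately truncating one
    return int(1000 * (x / 7)) // 3

def forward(prev, cur, steps):
    # x(t+1) = g(x(t)) - x(t-1), carried by a pair of states
    for _ in range(steps):
        prev, cur = cur, g(cur) - prev
    return prev, cur

def backward(prev, cur, steps):
    # exact inverse: x(t-1) = g(x(t)) - x(t+1)
    for _ in range(steps):
        prev, cur = g(prev) - cur, prev
    return prev, cur

state = forward(3, 5, 100)
print(backward(*state, 100))   # (3, 5) -- the initial pair, exactly
```

The point is that reversibility comes from the subtraction, not from g being well behaved; the same trick underlies second-order reversible cellular automata.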

\n

You’ve criticized me in the past for not publishing stuff, but I’m so
\nambitious as to what I’m trying to do that I haven’t had the motivation to
\npublish all the little things I’ve uncovered along the way.

\n

I’m sure I discovered more and better ways to make all kinds of RUCAs before
\nanyone else with the exception of the rule found by my student, Margolus.

\n

Finally, one last anecdote. You and I were at some meeting long ago (maybe
\nSanta Fe?) and you brought along an early Sun to demonstrate your collection
\nof different kinds of 1-D CA’s. After your talk, I asked you why none of
\nthe CA’s you showed were reversible. Your response was “Because all
\nreversible CA’s are trivial.” That really was a very common belief,
\ncoincident with most people’s intuition. On the spot, I made up a rule,
\nusing your convention for specifying it, of an “interesting” reversible CA.
\nYou typed it in and ran it. Being surprised is one of the best kinds of
\nexperiences we ever have.

\n

As Emerson once quipped, “My apologies for such a long email, I didn’t have
\nthe time to write you a short one.”

\n

I’m having fun; it’s a good thing to do!

\n

Best regards

\n

Ed F

\n

PS If you have any interest in having parts of your book read so that you
\ncan get comments prior to publication, I have an idea that might be useful.

\n
\n

\n

A little later he added:

\n

\n
\n
\n
Subject: error
\n
Date: Fri, 15 Sep 2000 10:12:15 -0400
\n
From: Ed Fredkin
\n
To: Stephen Wolfram
\n
\n

\n
\nHi,

\n

Looking at my long email I noticed a boo boo.

\n

Where I wrote, quoting Schroeppel talking about 3^(n-1) mod n, “…it
\nusually give a 1 for primes…” very true but a bit of an understatement.
\nOf course, it ALWAYS gives a 1 for primes! What Schroeppel said was that it
\nusually doesn’t give a 1 for non-primes. It’s incorrect for 91 and 121 and
\nlots of other small numbers, but seems to work better for large numbers…
\nbut then you probably know much more about such things than I do. Also
\nlooking at your questions, I had the feeling that some might have been
\nprompted by my circa 1990 Digital Mechanics paper. If so, I guess I
\nrepeated stuff already in the paper and I apologise.

\n

Regards,

\n

Ed F\n
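Ed’s correction is easy to verify: 91 = 7 × 13 and 121 = 11² both pass the base-3 Fermat check even though they are composite:

```python
def fermat3(n):
    """Schroeppel's quick check: 3^(n-1) mod n, which is always 1 for
    primes not dividing 3, and usually (not always) != 1 for composites."""
    return pow(3, n - 1, n)

print(fermat3(97))    # 1  (97 is prime)
print(fermat3(91))    # 1  (fooled: 91 = 7 * 13)
print(fermat3(121))   # 1  (fooled: 121 = 11^2)
print(fermat3(95))    # != 1, correctly exposing 95 = 5 * 19
```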

\n
\n

\n

I responded, asking for various pieces of clarification (and now that I’m writing this piece I would have asked even more, because some key parts of what Ed said I now realize don’t add up):

\n

\n
\n
\n
Subject: Re: your mail
\n
Date: Wed, 20 Sep 2000 21:04:54 -0500
\n
From: Stephen Wolfram
\n
To: Ed Fredkin
\n
\n

\n
\n…

\n

>> 1. As far as you know, did you invent the 2D XOR CA rule?
\n>>

\n> Yes, as far as I know I did invent it. Here is what I did. ….
\n>

\n

Very interesting.

\n

1a. Did you ever look at 1D CAs? If not, why not?

\n

1b. Did you think about analogies between XOR rules and linear feedback
\nshift registers?

\n

1c. Did you think about analogies between XOR rules and Pascal’s
\ntriangle?

\n

By the way, the result about the number of binomial coefficients mod a
\nprime has been independently discovered a remarkable number of times
\n(including by me). The earliest references I know are Edouard Lucas
\n(1877) and James Glaisher (1899).
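For p = 2 the result takes the form Glaisher gave it: the number of odd binomial coefficients in row n of Pascal’s triangle is 2^b(n), with b(n) the number of 1 bits in n, the same tally function that appears in Ed’s XOR-CA formula. A quick check in Python:

```python
from math import comb

def odd_in_row(n):
    """Count the odd binomial coefficients C(n, k) in row n."""
    return sum(comb(n, k) % 2 == 1 for k in range(n + 1))

# Glaisher's count: 2^b(n), where b(n) is the number of 1 bits in n
for n in range(64):
    assert odd_in_row(n) == 2 ** bin(n).count("1")
print(odd_in_row(10))   # 4, since 10 = 1010 in binary has two 1 bits
```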

\n

….

\n

>> 3. What other CA rules did you study at that time?

\n

> … I explored so many different rules that I probably
\nwould have found the game of Life had I not put blinders on.

\n

By the way, I happened to have a long phone conversation recently with
\nJohn Conway about the history of the Game of Life. I still haven’t
\nquite got to the bottom of exactly what Conway was doing and why (I
\nthink he wants some of the history lost, which is a pity, because it is
\ninteresting and reflects much better on him than he seems to
\nbelieve…) But what is clear is that Conway (and his various helpers)
\nhad much more serious motivations from recursive function theory etc.
\nthan is ever usually mentioned. It was just not a “find an amusing
\ngame” etc. piece of work.

\n

> Marvin Minsky challenged me to find a rule (any rule) that showed spherical
\npropagation. I took the challenge and shortly came up with such a rule.

\n

I don’t believe I’ve ever seen your rule of this kind. I showed such a
\nrule to Marvin in 1984 and he said “that’s very interesting; we were
\nlooking for these but hadn’t found any”. So I’m confused about
\nthis….

\n

\n

>> 6. What did you know about the work done by Unger etc. on cellular image
\nprocessors? How did this relate to your work?

\n

> I knew of it second hand, but I don’t think it had any effect.

\n

Wasn’t BBN quite involved with cellular image processing? And I believe
\nyou worked on aerial photography analysis. Did you use cellular
\nautomata for image processing?

\n

\n

>> 9. Were you aware of work on cryptographic applications of CA-like
\nsystems?

\n

> I thought I invented that idea!

\n

There was a lot of work done on 1D CAs by some distinguished
\nmathematicians consulting for the NSA in the late 1950s. I think much
\nof it is still classified. But over the years I’ve talked to many of
\nthe people involved (Gustav Hedlund, Andrew Gleason, John Milnor, some
\nNSA folk, etc. etc.), and read their unclassified papers. They figured
\nout some interesting stuff. They thought of it as related to nonlinear
\nfeedback shift registers.

\n

> As soon as I found ways to make RUCA’s it
\noccurred to me that they could be used for cryptography.

\n

How?

\n

There’s a 1D CA (rule 30) that I studied in 1984 that has been
\nextensively used as a randomness generator (e.g. Random[Integer] in
\nMathematica uses it), and that has been used a bit as a cryptosystem.
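Rule 30 itself is simple to state: each cell updates to left XOR (center OR right), and the randomness comes from the center column of the evolution from a single black cell. A minimal Python sketch:

```python
def rule30_center(steps):
    """Evolve rule 30 from a single 1 and collect the center column,
    the bit sequence used as a randomness source."""
    width = 2 * steps + 3          # wide enough that the edges stay 0
    row = [0] * width
    row[width // 2] = 1
    bits = [row[width // 2]]
    for _ in range(steps):
        # rule 30: new cell = left XOR (center OR right)
        row = [row[i - 1] ^ (row[i] | row[(i + 1) % width])
               for i in range(width)]
        bits.append(row[width // 2])
    return bits

print(rule30_center(10))   # starts 1, 1, 0, 1, 1, ...
```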

\n

I tried to make a good public key system out of CAs in the mid-1980s
\n(mostly in collaboration with John Milnor), but did not come up with
\nanything satisfactory. …

\n

\n

> I also have a lot of methods for making RUCA’s with particular properties.

\n

I am definitely somewhat interested in these things. They don’t happen
\nto be central to my grand scheme. But they are obviously worthwhile …
\nAND WORTH (you) WRITING DOWN!!

\n

I’m sure I discovered more and better ways to make all kinds of RUCAs before
\nanyone else with the exception of the rule found by my student, Margolus.

\n

Interesting. You probably know that the general problem of telling
\nwhether an arbitrary 2D CA is reversible is undecidable (the question
\ncan be mapped to the tiling problem).

\n

So I’m taking it that you have some good methods for generating 2D
\nreversible CAs. That’s obviously interesting.

\n

> Finally, one last anecdote. … I asked you why none of
\nthe CA’s you showed were reversible. Your response was “Because all
\nreversible CA’s are trivial.” …

\n

This anecdote can’t be quite right. I have known since 1982 that there
\nare nontrivial things that can happen in CAs that are made reversible by
\nyour mod 2 trick. What is true (and may have been what I was saying)
\nis that none of the 2-color nearest neighbor CAs that are reversible are
\nnon-trivial. With more colors or more neighbors, that changes. I’m
\nguessing that what you showed me was a 4-color nearest neighbor CA that
\nis reversible … and that is of course quite easy to get by recoding a
\n2-color one that has your mod 2 trick.

\n

By the way, I heard third hand a while back that you had “introduced me
\nto CAs”. For what it’s worth, that isn’t correct. My first “CA
\nexperience” was actually in 1973 (when I was 13) when I tried to program
\nmolecular dynamics on a very small computer, and ended up with something
\nequivalent to the square CA fluid model. My next CA experience was in
\nsummer 1981. I was trying to make models of “self organizing” systems
\n(now I hate that term), particularly self-gravitating gases. I ended up
\nsimplifying the models until I got 1D CAs. That fall I spent a month at
\nthe Institute for Advanced Study, and spent a lot of time studying von
\nNeumann’s work, etc., and analysing all sorts of features of 1D CAs. I
\ncame for a day to give a talk at MIT, and was having dinner with some
\nLCS people (Rich Zippel was one of them), and they told me about your
\nwork. Later that fall I talked with Feynman a certain amount about what
\nI was doing with CAs, and he again mentioned you. (I think he had been
\nto your Physics of Computation meeting, which was perhaps in June 1981,
\nbut I didn’t discuss the CA aspects of the meeting with him.) Then in
\n[January 1982] I came to the meeting you had on your island, and Tom Toffoli
\nshowed me his 2D CA machine (at the time he gave me the impression of
\n95% hackery, 5% science), and you showed me the 2D XOR CA on a PERQ
\ncomputer.\n

\n
\n

\n

Ed didn’t respond to this, but three days later we talked on the phone. I sent some (unvarnished) notes from the call to a research assistant of mine:

\n

\n
\n
\n
Subject: Fredkin conversation
\n
Date: Sat, 23 Sep 2000 03:05:43 -0500
\n
From: Stephen Wolfram
\n
\n

\n
\nI had a long conversation with Ed this evening.

\n

About his work in science, my work in science, etc.

\n

A few things mentioned:

\n

– He feels bitter that his paper on reversible logic, coauthored with
\nTom Toffoli, was actually all his (Ed’s) work

\n

– He is pleased that I will discuss history even when people haven’t
\npublished things (of course he has published little)

\n

– He says he has written about 150 pages about his views of physics; he
\nis planning to prepare something, perhaps for publication, in about a
\nyear

\n

– He says he missed not being able to bounce ideas off Dick Feynman …
\neven though Feynman often ended up screaming at him (Ed) about how dumb
\nhis ideas were

\n

– He said that his main problem was that he has been trying to get
\npeople to steal his ideas for years, but nobody was interested

\n

– He said that now “for some reason” he is becoming more concerned about
\nmatters of credit

\n

– He is a serious fan of Mathematica, the Mathematica Book, etc.

\n

– He made an effort again (he’s been trying for 20 years) to get me to
\ncoauthor a paper with him. He recognizes that he can’t write a credible
\nscientific paper, but he’s “sure he has some ideas I haven’t thought
\nof”. I told him that unfortunately I haven’t written a paper for 15
\nyears.

\n

– I told him that particularly when I’m in the Boston area, I’ll look
\nforward to chatting with him about physics etc.

\n

– He said he’s tried to interact some with Gerhardt ‘t Hooft, but that
\n‘t Hooft keeps on rushing off in traditional physics directions that Ed
\n(and I, by the way) think are stupid

\n

– He wanted to know if I really believed that all of physics etc. was
\nultimately discrete; he expressed the opinion that he and I may be the
\nonly people in the world who actually believe that right now

\n

– He told a bizarre story about how Don Knuth gave a talk at MIT
\nrecently on computers and religion, and how 1/4 of it was stuff that Don
\nhad heard about from Ed. Apparently Guy Steele asked a question about
\nhow Don’s stuff related to Ed’s, and Don said something meaningless.

\n

I talked to him a little more about the CA history stuff. He mentioned
\nthat around 1961 a certain Henry Stommel (sp?) told him that CA-like
\nmodels had been used in studying sand dunes in the 1930s. I have a
\nfeeling this may be another cat gut search, but perhaps we can follow
\nup. (You could email Ed at the appropriate time.)

\n

I asked Ed if he had ever looked at cryptography (as in NSA style stuff)
\nwith CAs. He said no. But that in the late 1960’s he had had a student
\nwho had studied ways to make counters out of JK flip flops … and that
\nthat person’s work had made something that Ed thought could be used for
\ncryptography. This was followed up by a certain Vera Pless
\nsubsequently.

\n
\n
\n

\n

I didn’t hear anything more from Ed for a while, though a public records search indicates that, yes, he had successfully “worked the system” to get $100k from the NSF for “The Digital Perspective Project”. And on May 1, 2001, I received a rather formal email from Ed (for some reason Americans born before about 1955 seem to reflexively call me “Steve”):

\n

\n
\n
\n
Subject: Workshop on the Digital Perspective 24-26 July, Washington DC
Date: Tue, 1 May 2001 21:20:45 -0400
From: Ed Fredkin
To: Steve Wolfram

We are sending this email to invite you to an NSF-sponsored workshop on the Digital Perspective in Physics planned for July 24th through the 26th, Tuesday, Wednesday and Thursday. It will be held in the NSF building, Arlington Virginia. Gerard ‘t Hooft has already agreed to present a paper and we hope that you will also be willing to contribute. We intend to combine the papers presented at the workshop into a monograph that will be published later this year. Two earlier workshops on related subjects were held at Moskito Island and this was a central theme at a meeting held at MIT’s Endicott house in 1982. Participants at previous meetings included Charles Bennett, Richard Feynman, Ed Fredkin, Leo Kadanoff, Rolf Landauer, Norman Margolus, Tomasso Toffoli, John Wheeler, Ken Wilson, Stephen Wolfram and others.

…

When I didn’t immediately respond, Ed called my assistant, saying that he was “calling regarding a meeting he spoke with [me] about on the phone”. I responded by email later the same day:

Subject: I gather you called…
Date: Tue, 15 May 2001 15:18:57 -0500
From: Stephen Wolfram
To: Ed Fredkin

Sorry for not getting back to you sooner….

I myself am right now trying to work at absolutely full capacity to finish my book/project. I haven’t done any travelling at all for a long time, and won’t until my book is done.

And I also don’t yet have anything public to say about my work on physics.

Hopefully by the end of the year my book will be done and I will have quite a bit to say.

However, it occurs to me that one or two of my assistants might be very good people to come to your workshop.

Who all is coming?

One person you should definitely invite is someone who has been an assistant of mine, and now works part time for me, and part time on his own projects. His name is David Hillman, and he’s been interested in discrete models of spacetime for a long time. (He got his PhD working on some kind of generalization of cellular automata intended as a spacetime model.)

I have two physics assistants, and one math one, who might be relevant for your workshop.

Just let me know in more detail who might be coming, and I’ll try to figure out the correct person/people to suggest.

Of course I’d love to come myself if I were a free man. But not until the book is done.

In haste,

— Stephen

Ed responded pleasantly enough:

Subject: RE: I gather you called…
Date: Tue, 15 May 2001 17:08:39 -0400
From: Ed Fredkin
To: Stephen Wolfram

Hi,

Sorry you can’t make it.

About half of those coming are veterans of some Moskito Island workshop. Newcomers include Gerry Sussman, Tom Knight, Gerard ‘t Hooft, John Negele, John Conway, Raj Reddy, Jack Wisdom, Seth Lloyd, David di Vincenzo, plus a number of students, etc. A couple of those mentioned are still struggling with scheduling issues.

But, in any case, I would be pleased to have David Hillman come to the workshop. Send me his email address and I will send him an invitation.

Best regards and good luck on the book!

Ed

I responded and suggested an additional person from our team for his workshop. Nearly a month passed with no word from Ed, so I pinged him asking what was going on. No response. It was a very busy time for me, and this wasn’t something I wanted to be chasing (I saw myself as doing Ed a favor by suggesting sending people to his workshop) … so I sent a slightly exasperated email:

Subject: your conference, again
Date: Fri, 15 Jun 2001 06:08:39 -0500
From: Stephen Wolfram
To: Ed Fredkin

Look … I’m now in a bit of an embarrassing situation: following your initial response, I told David Hillman and David Reiss about your conference … assuming you’d want to invite them … and they both became quite interested in it. But they never heard from anyone about it. So now of course they’re wondering what’s going on. And so am I. What should I tell them? I’m now embarrassed about having suggested this…

This seems peculiarly un-you-like. I was thinking you must have been away or something. But isn’t the conference coming up very soon?

I hope everything’s OK…

Still no response from Ed. A week later I called him, and we talked for two hours. It wasn’t clear why he hadn’t already reached out to the people I’d suggested, but he quickly said he would. And then Ed launched into telling me about the “astounding” cellular automaton models he said he’d just created that “had charge, energy, momentum, angular momentum, etc.”. He talked about things like the idea of what he called an “infoton” that would be an “information particle” that would “make Feynman diagrams reversible”. I explained why that didn’t make any real sense given how Feynman diagrams actually work. It was the same kind of conversation I had many times with Ed. I kept trying to explain what was known in physics, and he kept on coming back with things that, yes, I think I understood, but that seemed close to typical crackpot fare to me. But Ed seemed convinced he had discovered something great (though exactly what I couldn’t divine). And eventually—having obviously not convinced me of what he was doing “on its merits”—he just came out and said “It must be related to stuff you’re doing, one way or another”.


I explained that I really didn’t think that was very likely, not least because I emphatically wasn’t trying to use cellular automata as models of fundamental physics. And with that, Ed launched into a long speech about giving credit, particularly to him. I explained that I was trying hard to write correct history, and reiterated some of the questions I’d asked him before. He didn’t really tell me more, but instead regaled me with stories (that I’d mostly heard many times before from him) about how he’d been the first to figure out this and that—apparently oblivious to the historical research I tried to tell him about. But eventually we both had to go—and the conversation ended pleasantly enough, with him confirming the email addresses for the two people for his workshop.


As the workshop approached, the people from my team had made arrangements to go to Washington, DC—but still didn’t know where exactly the workshop was. With days to go, one of them simply called Ed to ask. But Ed told them that actually they couldn’t come, because “Raj Reddy says there is no room for you”. Really? No extra chair to be found? Ed was the organizer, wasn’t he? Why was he laying this on someone else? It seemed to me that Ed was playing some kind of game. But at that moment I was too busy trying to finish my book to think about it. (Now that I’m writing this piece, however, I realize that Ed was perhaps following an “algorithm” he’d established years earlier when he was proud to have organized a meeting to push forward his ideas about timesharing—by inviting just people who supported his ideas, and not inviting ones who didn’t. I don’t know if the meeting actually happened, or what went on there. I don’t think the writeup promised in the invitation and in the NSF contract ever materialized.)


In January 2002 A New Kind of Science was off to the printer, and review copies were starting to be sent out. In late March a seasoned journalist named Steven Levy (who had written about my work on cellular automata in the mid-1980s) was talking to someone from my team and reported that Ed had told him that “Minsky had told [Ed] to publish his stuff on the web to stake out priority” before my book came out. (And it’s a pity Ed didn’t do that, because it might have made it clear to him and everyone else how different what he was saying was from what I was saying.) But in any case Levy said that Ed seemed to be saying the same things as he’d said 15 years ago—and Levy knew that, regardless of anything else, I’d done incredibly much more since then.


After his conversation with Levy, Ed sent me mail:

Subject: The Book
Date: Fri, 22 Mar 2002 16:38:29 -0800 (PST)
From: Ed Fredkin
To: Stephen Wolfram

Congratulations on finishing!!!

I ordered the book from someplace, so long ago I can’t remember from who. I’m wondering if, when its possible, I could get a copy in advance of whenever my ordered copy is going to appear. I just don’t want to be the last on the block to see it. Of course I’d be happy to pay if you can tell me how to do it.

Thanks,

Ed Fredkin

The book was going to be published on May 14; on May 4 I signed a copy for Ed:


The book mentioned Ed a total of 7 times. (The person with the most mentions overall was Alan Turing, at 19; Minsky had 13; Feynman 10.)


Ed never told me he’d received the book. And I’m not sure he ever seriously looked at it. But somehow he was convinced that since he knew it talked a lot about cellular automata, and had a section about physics, it must be about his big idea—that the universe is a cellular automaton. As one witty friend pointed out to me in connection with writing this piece, my book says only one thing about the universe being a cellular automaton: that it isn’t! But in any case, Ed apparently felt that I was stealing credit from him for his big idea—and, as I now realize, started an urgent campaign to right the perceived wrong, basically by telling people that somehow (despite all my efforts to describe the history) I wasn’t giving anyone enough credit and that “he was there first”. The New York Times rather diplomatically quoted Ed as saying “For me this is a great event. Wolfram is the first significant person to believe in this stuff. I’ve been very lonely”. It followed up by saying that “Mr. Fredkin, who said he was a longtime friend, said Dr. Wolfram had ‘an egregious lack of humility’”. (In some contexts, I suppose that might be a compliment.)


In writing this piece I asked Steven Levy what Ed had actually said in the interview he did. His first summary in reviewing his notes was “He says he considers you a friend and then goes on endlessly about what an egomaniac you are”. But then he sent me his actual notes, and they’re somewhat revealing. Ed doesn’t claim he introduced me to cellular automata, perhaps because he realizes that Levy knows from the 1980s that that isn’t true. But then Ed tells the story about showing me reversible cellular automata, which I’d explained to Ed wasn’t true. Ed goes on to say that “Everyone who’s in science wants credit, driven probably by wanting to become famous. [Wolfram] has a larger than normal dose”. Ed says that when he had said that cellular automata underlie physics, I’d said that was crazy. (Yup, that’s true.) But then Ed said “Now he denies this”. Huh? Ed went on: “He’s a prisoner of some kind of overactive ego. I believe he might not know. Wolfram deserves loads & loads of credit, but he has this personality flaw”. And so on.


A month later Ed wrote to me:

From: Ed Fredkin
To: Steve Wolfram
Sent: Friday, June 14, 2002 2:48 PM
Subject: ANKOS critics

Steve,

Sometime soon I’d like to get together and talk.

I’ve read a lot of your book.

Take a look at the draft of a little paper of mine (attached). I’d appreciate comments.

Ed F


The following is someone else’s response [Gerry Sussman] to a review of ANKOS.


My comments are only with regard to Wolfram’s ideas on modeling physics. I don’t happen to like his network model but we are in agreement that some kind of discrete process might underlie QM.

Not everything Wolfram says is wrong.

The ideas that some kinds of discrete space-time processes (such as CA’s) might underlie physics or other processes in nature is the BABY. Everything else in ANKOS (or missing from ANKOS) is the BATH WATER.

Ed’s attached paper was basically yet another restatement of cellular automata as models of fundamental physics.


A few weeks later there was a strange (if in some ways charming) incident when a reporter for the San Francisco Chronicle decided to investigate what seemed to be a science feud between Ed and me. After a nod to medieval metaphysicians, the article (under the title “Cosmic Computer”) opens with “Nowadays, with a daring that might have dazzled St. Augustine and St. Thomas Aquinas, two titans of the computer world argue that everything in the universe is a kind of computer.” After analogizing me to Britney Spears, the article goes on to say “The excitement has also brought tension to the long-standing friendship between Wolfram and Fredkin, who are now wrestling with one of the bigger bummers of any scientist’s life: a dispute over originality.” The article reports: “Last week, the two men had a long, heartfelt phone conversation with each other, in which they tried to resolve their strong disagreement over priority. The conversation was amicable, but they failed to reach agreement.”


And so things remained until March 2003 when Ed sent the following:

Subject: Re: NKS 2003 Conference & Minicourse
Date: Thu, 20 Mar 2003 17:24:59 -0500
From: Edward Fredkin
To: Stephen Wolfram

Dear Stephen,

I guess I’m on a Wolfram mailing list for potential attendees for your Boston conference. I hope you don’t mind a little plain speaking. I consider that I am a friend of yours and therefor I take the risk of telling the emperor about his new clothes. Of course, few others would do so as a friend. Please don’t be offended as the plain talking that follows is my attempt at trying to be constructive.

Your work is acquiring a reputation amongst the scientific community that is much less than it deserves. I find myself often in the position of defending you, your work and your accomplishments against the negative views that many hold, even though they have little understanding of the significance of what you have done. They are turned off by your egregious behavior; it distracts much of the scientific community rendering it barely possible for them to take you seriously. You have invented and discovered quite a few things, but so have others. You told me you would try to give credit in ANKOS where credit was due; I believed you and I believe that you tried your best but nevertheless you failed miserably. I guess you simply didn’t know how. Consider this conference: Must this conference be a one man show or might it actually be better for the ideas in ANKOS and better for SW and his overall scientific reputation if it were a real conference where others might address the same questions? Please don’t kid yourself into thinking that no one else has anything original, novel, important or interesting to say.

Of course, this so-called “…first ever conference” devoted to the “…ideas and implications…” of concepts found in ANKOS might be nothing more than a marketing tool for Mathematica and for sales of the ANKOS book. If so, you ought to call a spade “spade”.

You’ve done enough things (and hopefully will continue to do so) to ensure your reputation as a pioneer in various areas. This flood of self puffery simply detracts, in the minds of many whose opinions you ought to value, from the positive reputation you deserve.

I’m not one of those whose opinion of your work is in any way affected by your unfortunate behavior. I see and understand exactly what you’ve done and I know and understand what your work is based on. I am human, so I find it interesting when you now and then claim to have discovered an idea or fact that I personally explained to you when it was perfectly clear at the time that to you, the idea was absolutely novel. My model of you is that your overpowering motivation results in your mind playing tricks on you. I really believe that you actually forget; that you actually re-remember the past differently than it happened.

But I am the eternal optimist. I believe that even Stephen Wolfram might someday come around and join the collegial scientific community where you receive credit and give credit; both nearly effortlessly. The world actually might voluntarily heap honors on you as opposed to SW having to orchestrate “conferences” for the glorification of SW and all the ideas claimed by SW. No one knows better than me how slow and torturous this process can be for new and novel big concepts, but patience and modestly [sic] still seems like the better path.

Please try to not be offended. I actually mean well. If you ever have an actual, real conference, invite me to be a speaker; I’ll come. If I organize another conference you can rest assured that you will be invited again (as you were for the NSF Workshop) and I hope you will come to talk about your ideas and maybe — maybe even stay to hear what others have to say on the subject. It’s not too healthy to the scientific mind to be the only real speaker at conferences you organize and hype for yourself.

Among the very few who really are able to appreciate what you’ve done, I am one of your greatest supporters. But I am not your average person with more or less normal reactions. When you reach for extra glory and credit by stealing one of my ideas, my reaction is: “I admire your good taste”.

Best regards

Ed F


On Thursday, March 20, 2003, at 02:19 PM, Stephen Wolfram wrote:

> In June of this year we’re going to be holding the first-ever conference devoted to the ideas and implications of A NEW KIND OF SCIENCE. I think it’ll be an exciting and unique event. And if you’re interested in any facet of NKS or its implications, you should plan to come!
>
> I’ll be giving a series of in-depth lectures to explain the core ideas of NKS. There’ll be more specialized sessions exploring implications and applications in areas such as computer science, biology, social science, physics, mathematics, philosophy, and future of technology. And there’ll also be workshops and case studies about such issues as modelling, computer experimentation, defining NKS problems, NKS-based education–as well as a gallery of NKS-based art pieces.
>
> I’d expected that it’d be a few years before it would make sense to start having NKS conferences. But things have gone faster than I expected, and the enthusiasm and energy we’ve seen in the ten months since the book was published has made it clear that it’s time to have the first NKS conference.
>
> In planning NKS 2003, we want to cater to as broad a range of attendees as possible. There’ll be many professional scientists coming, as well as technologists and other researchers from a very wide range of fields. There’ll also be a large number of educators and students, as well as all sorts of individuals with general interests in the ideas and implications of NKS.
>
> We’ll be holding NKS 2003 near Boston over the weekend of June 27-29, 2003. There’s more information and registration details at http://www.wolframscience.com/nks2003
>
> It’s going to be an extremely stimulating weekend–and a unique opportunity to meet a broad cross-section of people interested in new ideas.
>
> I hope you’ll be able to be part of this pioneering event!
>
> — Stephen Wolfram

I responded:

Subject: Re: NKS 2003 Conference & Minicourse
Date: Sat, 22 Mar 2003 22:19:33 -0500
From: Stephen Wolfram
To: Edward Fredkin

Ed —

I must say that I am reluctant to respond to a note like the one below, but it seems a pity to let things end this way.

I can tell you’re very angry … but beyond that I really can’t tell too much.

I’d always thought we had a fine, largely social, relationship. We talked about many kinds of things. It was fun. Occasionally we talked about science. In the early 1980s I learned a few things about cellular automata from you. None were extremely influential to me, but they were fine things that you should be proud of having figured out—and in fact I took some trouble to mention them in the notes to NKS.

You also told me some of your thinking about fundamental physics. I was (I hope) polite, and tried to be helpful. But I always found what you were saying quite vague and slippery—and when it became definite it usually seemed very naive. I think it’s a great pity that you’ve never taken the time to learn the technical details of physics as it’s currently practiced. There’s a lot known. And if you understood it, I think you’d be able to tell quite quickly which of your ideas are totally naive, and which might actually be interesting.

I think it’s also a pity that—so far as I can tell—you’ve never really taken the time to understand what I’ve done. It’s in the end pretty nontrivial stuff. It’s not just saying something like “the universe is a cellular automaton” or “I have a philosophy that the universe is like a computer”. It’s a big and rich intellectual structure, built on a lot of solid results and detailed, careful, analysis. That among other things happens to give a bunch of ideas about how physics might actually work—that have (so far as I know) almost nothing to do with things you’ve been talking about.

I do agree with your belief that the universe is ultimately discrete. But of course many people have for a long time said that they thought the universe might at some level be discrete. Some of those people (like Wheeler, Penrose, Finkelstein, etc.) are sophisticated physicists, and what they’ve said has lots of real content—it’s not just vague essay-type stuff. Now, I don’t happen to think what they’ve specifically proposed is correct. But you would be completely wrong to think (as you seem to) that somehow the idea that the universe might be discrete originated with you.

I really encourage you to read NKS in detail, including the notes at the back. I think there’s a lot more there than you imagine. And I think if you really understood it, you would be completely embarrassed to write a note like the one below.

You’ve never struck me as being someone who is terribly interested in other peoples’ ideas. And that’s of course fine. But you shouldn’t assume you know their ideas just on the basis of a few buzzphrases or some such. In some areas of business, that approach often works. Because, as we both know, the ideas typically aren’t that deep. But it won’t work with a character like me doing science. There’s too much nontrivial content. You have to actually dig in to understand it. And from the things you say you obviously haven’t.

For twenty years I thought we had a fine personal relationship. I thought it was a little odd that you seemed to go around telling people that you had introduced me to cellular automata. We talked about this a few times, and you admitted this wasn’t a true story. But while I thought it was a little unreasonable for you to keep on saying something you knew wasn’t true, I didn’t pay much attention. It never really got in the way of our relationship.

And then there was the incident of your NSF-funded conference. You invited me. I said I couldn’t come. And suggested two alternates. You said fine. But then you never contacted these people. Which was rather embarrassing for me. And then, when David Reiss contacted you, you told him the conference “was full”.

Later, when we talked about it, you admitted that that was a lie—and then blamed the lie on Raj Reddy.

Frankly, I was flabbergasted by all this. That’s not the kind of interaction someone like me expects to have with a seasoned high-level operative like yourself. Yes, that’s the kind of thing some sleazy young businessperson might do. But not a mature businessperson who has run companies and things.

I still have no idea what you were thinking of. But it thoroughly shook my confidence in you as someone I could interact straightforwardly with.

And then, of course, there’s the question of what you’ve said to journalists etc. about NKS. In detail, I don’t have much idea. But something fishy was surely going on. I haven’t gone and studied all the quotes from you. But certainly my impression was that you were trying to claim that really lots of key things in NKS were things you had done or said first.

You know that I tried to research the history carefully. And unless I missed something quite huge, your contributions to NKS were extremely minor, and are certainly accurately represented in the history notes. Now of course if you don’t actually understand what I’ve done in NKS, that may be hard to see. But I can’t really help you with that.

OK, where do we go from here?

We talked at some length when that reporter from a San Francisco paper was trying to write a story about you and NKS. I thought we had a decent conversation. But then, so far as I could tell, you went right ahead and told the reporter—again—exactly a bunch of things we’d agreed in our conversation weren’t true.

It was the same pattern as with telling people that you’d introduced me to cellular automata. And it resonated in a bad way with the lie you told about your conference.

I would have expected vastly better from you. I must say that I was personally most disappointed. And I concluded with much regret that I must have seriously misjudged you all these years.

I would like nothing more than to be able to mend our relationship, and go back to the kind of pleasant social interactions we have always had.

How can that be achieved? Perhaps it’s impossible. But one step is that you might actually try to understand what I’ve done in NKS. That would surely help.

— Stephen

Subject: Delayed reply
Date: Thu, 3 Apr 2003 23:34:17 -0500
From: Edward Fredkin
To: Stephen Wolfram

Stephen,

I have been traveling and more recently have had my time gobbled up by a most urgent matter.

I appreciate your quick reply to my email and I will get back to you sometime soon. Rather than trying to respond to everything you brought up, I will be limited to dealing with a couple of issues at a time.

What I can tell you is that I am not angry, and was not angry or upset. I have always been a non-emotional observer with regard to whatever it is that comes my way. That’s just my nature. It has come in handy at times, such as when someone’s stupid mistake caused the single engine jet fighter I was flying have an engine fire on take off. This required shutting down the engine and taking other drastic actions very quickly; no time to get mad.

The gist of my comments to you was not related to the work you documented in NKS, but rather to the style and methodology you are using while trying to get people to understand and appreciate what it is you have done. I certainly agree with the fact that it is extraordinarily difficult to get the scientific establishment to pay attention, listen, understand and appreciate what you’ve done. Nevertheless, I think there might be a better approach to that problem than the one you are following.

So, as soon as I can get a little breathing room, I’ll respond to some of your comments. I do value our friendship and whatever I do in this regard will be an attempt at honest and unemotional communication with the goal of some better mutual understanding.

By the way, I have taken the time to read and understand what you’ve done in NKS. I’m pretty sure that I am better able than most to appreciate the effort, persistence and creativity that went into that work.

You have made some comments about me and my own work, and I wonder what you actually know about it beyond our conversations and the things you referenced in NKS.

As soon as I can get some time I’ll continue with some further thoughts.

Best regards,

Ed


Ed didn’t send the promised followup. But a couple of months later New Scientist sent our media email address a note titled: “cover feature on Fredkin, Wolfram right to reply”, which asked for “comment on the suggestion that you first became familiar with cellular automata first at Fredkin’s lab in the 1970s and that examples in A New Kind of Science came out of work done in the lab”. I told Ed he should correct that—and he responded to me:

Subject: Re: cover feature on Fredkin, Wolfram right to reply.
Date: Thu, 29 May 2003 13:55:02 -0400
From: Ed Fredkin
To: Stephen Wolfram

Stephen,

I carefully and clearly told the author of the NS article that to my knowledge it is not true “… that Wolfram first became familiar with cellular automata at Fredkin’s lab in the ’70’s…” and further that you already knew about CA’s.

My guess is that magazines see value in controversy and they would like to attribute statements to each of us that helps them titillate their readers. I tried in every way I could to correct any wrong impressions the author had. But what they end up doing is beyond my control.

As to cracking the fundamental theory of physics, I did read and did understand what you wrote about in NKS, however my interests lie in models that are regular and based on a simple underlying Cartesian lattice. The models I have been working on for the past few years are called “Salt” as they are CA’s similar to an NaCl crystal. You can read about it at www.digitalphilosophy.org.

My approach to being consistent with QM, SR and GR is related to the fact that CA models of physics can exactly conserve such quantities as momentum, energy, charge etc. By means of a variant of Noether’s Theorem, the physics of such CA’s can exhibited the all the symmetries we currently attribute to physics, but doing so asymptotically at scales above the lattice.

Thus, in my concept of a theory of physics, translation symmetry, rotation symmetry etc. would all be violated as we currently understand is true for time symmetry, parity symmetry and charge symmetry.

No one suggests that you should agree with all my ideas, however your comment in your prior email to me is unnecessarily condescending:

> “I think it’s a great pity that you’ve never taken the time to learn the technical details of physics as it’s currently practiced. There’s a lot known. And if you understood it, I think you’d be able to tell quite quickly which of your ideas are totally naive, and which might actually be interesting.”

What is certain is that there’s no “great pity” necessary. I actually do know a lot about the technical details of physics. In any case, thirty years ago Feynman thought that I needed to learn more about certain aspects of QM. He was specific in what he felt was everything more that I needed to know (in order to make progress with my CA ideas). He offered to work with me, which was accomplished during the course of the year I spent at Caltech (1974-1975). I studied, learned more about QM and passed the final exam that Feynman gave me. While we argued a lot, Feynman never accused me of having naive ideas.

As to NKS 2003, it doesn’t make a lot of sense for me to come to be a member of the audience. If you would like me to participate in some meaningful way, let me know.

Best regards,

Ed F


And after that exchange, Ed and I basically went back to being as we had been before—having pleasant interactions, without any particular scientific engagement. And in a sense for many years I kept out of Ed’s scientific way—not seriously working on physics again until 2019.


Since 2002 I’d been living in the Boston area, so Ed and I ran into each other more often. And although Ed’s behavior over A New Kind of Science had disappointed and upset me, it gave me a better understanding of Ed as a human being, and a vulnerable one at that.


The Later Ed


It was always a little hard to tell just what was going on with Ed. In July 2003, for example, he wrote to me:

Subject: Gunkel
Date: Thu, 24 Jul 2003 19:07:56 -0400
From: Ed Fredkin
To: Stephen Wolfram

Stephen,

First I must apologize for this long letter. Pat Gunkel sent me an email telling of your visit. It prompted me (who hardly ever writes anything) to type up my thoughts for whatever they’re worth.

You might be surprised at the number of wise and intelligent people who really appreciate Pat and his works. Yet after more than 30 years of fitful, diverse yet nearly continuous support, Pat has come to a situation that, to him, looks like the end of the line.

I, unfortunately, am no longer in a position to personally provide the kind of modest support that Pat needs to continue his church-mouse kind of existence.

There is no doubt that Pat can be a difficult person to help, but I notice that he has mellowed with age. Of course, Wolfram Research is not a charitable institution. But I believe that Pat’s ideas on ideonomy are really important and that those ideas may form the basis of interesting future applications. The point of all this is that if what Pat is doing seems interesting to you, some arrangement with Wolfram Research might make sense.

\n
\n
\n

\n

(True to form, Gunkel followed up with a very forthright note, including a scathing critique he’d written of A New Kind of Science—as well as of Ed’s theories. That wouldn’t have deterred me, but I couldn’t see anything Gunkel could actually do for us, so I never pursued this.)

\n

But did Ed’s note imply that Ed was running out of money? I’d always assumed some kind of vast business empire lurking in the background, but now I wasn’t sure.

\n

I saw Ed only a few times in the next couple of years—at events like a Festschrift for Sulak and a bat mitzvah for one of Feynman’s granddaughters. But as usual, he was eager to tell stories, some of which I hadn’t heard before—mostly about things far in the past. He said that in the early 1960s John Cocke had stolen the idea of RISC architecture from his murdered friend Ben Gurley, though it had taken him two decades to get it taken seriously. He said that around the same time he’d been pulled in by the Air Force to help with analysis of blast waves from nuclear tests (and that story came with descriptions of B-52s doing loop-the-loop maneuvers when they dropped atomic bombs). He said that he’d once demoed the Muse music system (which, he emphasized, he, not Minsky, had invented) to an astonished audience in the Soviet Union. He said that he’d advised Richard Branson on his transatlantic balloon trip, telling him his butane burners weren’t correctly mounted—and in fact they fell off. And so on.

\n

In 2005 Ed told me he’d been working with a programmer in California named Dan Miller (who’d developed audio compression software [and been at the NKS 2003 conference that Ed had been so upset about]) on the new 3D cellular automaton he’d invented that he called the “SALT architecture” because its pattern of updates was like the Na and Cl in a salt crystal.

\n

But then in 2008 Ed told me he’d sold his island—presumably relieving whatever financial issues he’d had before—and suddenly Ed started to show up much more. He told me (as he did quite a few times) that he was working on a book (which never materialized). He told me he was teaching a course at Carnegie Mellon on the “Physics of Theoretical Computation”—which was apparently actually a very-much-as-before “engineering-style” effort to explore building features of physics from a cellular automaton, now with his SALT architecture. He invited me to a dinner at his house in honor of ‘t Hooft, photographed here with Ed, me and Sulak:

\n

Click to enlarge

\n

That fall, Ed came to the Midwest NKS Conference in Indiana, here photographed in a discussion with Greg Chaitin, me and others:

\n

Click to enlarge

\n

I would interact with Ed quite regularly after that—most often with him telling me about his use of Mathematica and soon Wolfram|Alpha. In 2012 Ed—now aged 78—sent me a nice “I have an idea” email (I made the requested introduction, though I’m not sure if this ever went anywhere):

\n

\n
\n
\n
Subject: Alpha and Problem Solving
\n
Date: Fri, 19 Oct 2012 20:12:01 +0000
\n
From: Edward Fredkin
\n
To: Steve Wolfram
\n
\n

\n
\nSteve,

\n

The first thing I taught at MIT was a course in general problem solving (in 1968).
\nI’m now developing a new course on General Problem Solving which I expect to
\noffer first at Harvard’s HILR program. Part of the motivation came from watching
\nJoyce struggle with a Harvard course on Chemistry, where a lot of the homework
\ninvolved units conversions. I noticed that Alpha promptly solved many of Joyce’s
\nhomework problems including some involving chemical reactions. (The course
\nwas really for students planning to take the MCAT Exam in order to get into Medical
\nSchool). One clue that you might give to the Alpha developers, is to work toward
\ngetting Alpha to have more of the capabilities necessary to pass different standard
\ntests that involve various kinds of quantitative analysis. (Of course, you might
\nhave already done so.)

\n

You might recall that I discussed the issue of units conversion with you long ago
\n(before Mathematica), and you described the idea you then had that turned into
\nConvert in Mathematica.

\n

In any case, Alpha is fantastic, and getting better all the time. My plan is that every
\none of my students must use Alpha for every problem that involves numbers, along
\nwith some that don’t involve numbers. My motto is John McCarthy’s dictum:
\n“Those who refuse to do arithmetic are doomed to talk nonsense!” However, with
\nAlpha, the problem solver doesn’t have to do the arithmetic or the units
\nconversions; Alpha can do it!

\n

It would be helpful if I could get a little bit of cooperation from someone in the Alpha
\ngroup. Basically, I will want to talk to an Alpha expert from time to time to make sure
\nI’m taking advantage of the best that Alpha can do along with resources already
\ndeveloped for introducing Alpha to new users. My initial students will be drawn from
\na group of retirees who, while clearly above average in intelligence, may have few
\nrecently used skills in mathematics. I also expect that almost all of my initial
\nstudents will be first time Alpha users. Again, I might profit from discussion
\nwith someone who has thought about how to introduce Alpha to beginners.

\n

Let me know what you think or, if you like, we could get together to talk about it.

\n

Best regards and Congratulations!

\n

Ed\n

\n
\n

\n

In 2014, when I recorded some oral history with Ed—now age 80—he was again brimming with ideas. The one he was most excited about had to do with weather prediction. It started from the observation that most smartphones have pressure sensors in them. Ed’s idea was to use these—and more—to create a sensor net that would continuously collect billions of pressure measurements, to be fed as input to weather forecast codes. Channeling his lifelong interest in reversible computing he imagined that the codes could be made reversible, and that running backwards from an incorrect prediction could tell one where more data had to be collected. Then Ed imagined doing this by having tiny balloons all over the place—with nothing that would cause trouble if a plane ran into it. He had a whole plan for partners he wanted to get (and, yes, he wanted us to be part of it too). And in typical Ed fashion, it was all laced with stories:

\n
\n

You know, I had this personal experience with weather. I was flying a glider along at 16,000 feet, and I encountered sink. You know, sink is wind blowing down. And the speed of the sink was 10,000 feet a minute. I was at 16,000 feet. And two minutes later, I was on the ground landing. Not on purpose. You know my attitude was—if I don’t see a big grading on the ground—[the wind] can’t keep going this way all the way down, so I won’t be killed. Actually, in that same storm, one of the pilots was killed.

\n
\n
\n

\n
\n
\n

The weather people just aren’t into the vertical movement of air. They do everything in layers. But this went through a lot of layers all at once in an organized fashion. So the point is that to talk about thousands or even millions of sensors makes no sense. You’re not going to do good weather until you get billions of sensors. That’s my opinion.

\n
\n

We talked about whether sensitive dependence on initial conditions destroys all predictability in fluid dynamics. I have theoretical and computational reasons to think it doesn’t. But Ed had a story:

\n
\n

There’s a mountain in California I happen to know, and I have a picture of a cloud street that starts on that mountain because it has a very peculiar geometry, and then runs for 2,000 miles.

\n
\n
\n

So this particular mountain has an area of its rock that faces towards the east and it’s big. And what happens is when the Sun is shining on that and the wet wind is coming from the Pacific and so on, you get this big cumulus cloud that flows back this way, and then you get another one and it pulses. You get one after another. And these are very stable things and they travel a very long way. So my point is that amidst all the randomness there’s a lot of order that can be found and understood. There are regions that have funny properties. They’re much more temperature stable. There’s like islands of stability. And things like that get ignored by everything people are doing today, you know what I mean?

\n
\n

I would send things I’d written to Ed. I didn’t really think he’d read them. But I thought he might at least enjoy their concepts. And often he would respond with ideas of his own. I sent him an announcement about our Tweet-a-Program project (now reconfigured because of Twitter changes) with the one-line comment (reflecting his “best programmer” self-characterization): “A new frontier of programming prowess?” He responded, in typical Ed fashion, with an idea—that’s actually a little reminiscent of modern AI image generation:

\n

\n
\n
\n
Subject: Re: Tweet-a-Program
\n
Date: Fri, 19 Sep 2014 21:25:47 +0000
\n
From: Edward Fredkin
\n
To: Stephen Wolfram
\n
\n

\n
\nHi,

\n

I like it! As usual, it gave me ideas that might be outside of your
\ncurrent concept.

\n

We should talk sometime, so that I can explain something closely related
\nto [Tweet-a-Program] but decidedly different and perhaps even more fun.
\nStrangely, it has to do with Haiku.

\n

What I have figured out is that there could be a new kind of Haiku, where
\nthe text is interpreted by Mathematica to generate an image.
\n …
\nThe trick will be having the image reflect something of the Haiku meaning,
\neven if only abstractly. I don’t know how to do this so that it does the perfect
\nthing every time, but I have thought of something that could be fun, and a
\nperson could become skilled at creating Mathematica Haikus that seem to reflect
\nsome aspects of the feeling of the words in an image with some increasing
\nprobability of doing it well, as a result of practice.

\n

\n

Ed\n

\n
\n

\n

Late in 2014 Ed sent me another piece of mail saying he was starting a project to produce a “new cellular automaton system”—and he wanted to use our technology to do it. He also sent me a paper he’d written about his SALT cellular automaton:

\n

Click to enlarge

\n

Finally—and without my help—Ed seemed to have mastered the art of academic papers. This one was on the arXiv preprint server. Others—with titles like “An Introduction to Digital Philosophy”—had appeared in academic journals. (Ones with titles like “A New Cosmogony” and “Finite Nature” were more privately circulated.) But what most struck me about this particular paper was that—for the first time—it seemed to have actual images of cellular automaton behavior. Ever since those few minutes with the PERQ computer on Ed’s island in 1982 I hadn’t seen Ed ever show anything like that. And now Ed was again chasing that old question Minsky had asked, of making a circle with a cellular automaton.

\n

At the time, I didn’t have a chance to see what Ed had actually done, and whether he’d finally solved it. But in writing this piece, I decided I’d better try to find out. The actual rule—that Ed and Dan Miller called “BusyBoxes”—is quite complicated, involving knight’s-move neighborhoods, etc. Their claim was that starting with a string of cells in a particular configuration, the average of their positions would trace out what in the limit of a long string would be a circle:

\n
\n
\n

\n

At first it looks like a kind of magic trick (and no, nothing is bouncing off any “walls”; the direction changes are just a consequence of the initial pattern of cells). But if you keep all the locations that get visited, things start to seem less mysterious—because what you realize is that the “basket” that gets “woven” is actually just a cube, viewed from a corner:

\n
\n
\n

\n

Where does the apparent circle come from? The details are a bit complicated—and I’ve put them in an appendix below. But suffice it to say that Ed’s old nemesis—calculus—comes in very handy. And in fact it lets one show that although one gets almost a circle, it’s not quite a circle; even with an infinite string, its radius is still wiggling by about 0.5% as one goes around the “circle”:

\n
\n
\n

\n

And—as we’ll see below—remarkably enough one can get a closed-form result for the amount of wiggliness (here computed as the ratio of maximum to minimum radius):

\n
\n
\n

\n

In earlier years, Ed might have tried to say that generating a circle (which this doesn’t) was tantamount to showing that a cellular automaton could reproduce physics. But by now I think he realized that it was really much more complicated than that. And he wasn’t mentioning physics much to me anymore. But—perhaps not least because many of his longtime interlocutors had by then died—he was interacting with me more than before. And perhaps he was even beginning to think that I might have a bit more to contribute than he’d assumed.

\n

In December 2015 I sent Ed a piece I’d written to celebrate the bicentenary of Ada Lovelace, and he responded:

\n

\n
\n
\n
Date: Fri, 11 Dec 2015 15:58:14 +0000
\n
From: Edward Fredkin
\n
To: Stephen Wolfram
\n
\n

\n
\nStephen,

\n

I was truly blown away by your essay re Ada Lovelace! You’ve got a lot
\nmore to give the world than I had imagined, and I, more than anyone else,
\nappreciate what you might still be capable of accomplishing.

\n

It’s too bad that some persons at MIT, for far too long, hung onto one
\ndimensional views focussed on what Macsyma might have been. My own
\nimpressions have always been different, I recognized your potential long
\nago and consequently invited you to one of my Mosqito Island conferences
\nsome 3.5 decades ago.

\n

In any case, much of what Mathematica makes possible is very important and
\nvaluable to me. As you know I was an early user and continue to be a
\nuser.

\n

Many of my interests have run along many paths opened up by activities you
\nhave instigated at Wolfram. Wolfram Alpha and its connections to Siri,
\nare examples.

\n

Your new book “An Elementary Introduction to the Wolfram Language” (I
\ndon’t yet have a hard copy) fits in with a project I had in mind for my
\ngrandchild Robert, who at age 6 already seems to be extraordinarily
\ntalented mathematically.

\n

To cut to the chase, I want to make a proposal: Although I’m too old to
\nbe a regular employee, I’d nevertheless like to have an association with
\nWolfram, where I might be able to contribute ideas, and solve problems
\n(I’m still quite good at that).

\n

I won’t need much from you other than your opening the door to my
\ninvolvement at Wolfram. What I have in mind would be an arrangement
\nwhere I could work for Wolfram, with some kind of arrangement other than
\nfull time employment.

\n

I’ve attached something I wrote recently.

\n

Ed\n

\n
\n

\n

Gosh! That was an unexpected development. Flattering, I suppose. But my main reaction was a kind of sadness. Yes, after all these years, Ed had finally read something I’d written. But somehow his response sounded like he was surrendering. This wasn’t the “I-want-to-do-everything-for-myself” Ed I had known all this time. This was an Ed who somehow felt he needed us to support him. And while our company has been able to absorb a great many “unusual” people—with terrific success—Ed seemed like he was pretty far outside our envelope.

\n

At the time, I didn’t look at the attachment Ed sent with his email. But opening it now adds to my sense of sadness. It was a 13-page document about a system Ed imagined that would help people with “various forms of cognitive disabilities”, including a section on “Dementia and Alzheimer’s”:

\n

Click to enlarge

\n

It wasn’t until 2017 that Ed explicitly mentioned to me that his short-term memory was failing—though in talking to him it had been increasingly obvious for several years. He said he’d joined a group of people who were writing their memoirs. I told him I’d look forward to seeing his, though I’m not sure he ever made much progress on them.

\n

Ed continued to send me ideas and proposals. There was a very Ed-like “global idea” about creating a system “GM” (presumably for “General Mathematician”) that would effectively “learn all of mathematics” by automatically reading math books, etc. (yes, definite overtones of what’s happening with LLM-meets-Wolfram-Language):

\n

Click to enlarge

\n

Later there were several pieces of mail about a new idea for factoring integers. In the first of them (from 2016), Ed told me that when the NeXT computer first came out (in 1989) he’d used Mathematica on it to simulate a reversible hardware multiplier. And being reminded of this by a historical piece I’d written, he said it had “started me thinking, again, about that problem and I had a new insight that appears to so greatly reduce the complexity of a reversible multiplier so as to possibly make it better at factoring large integers than current algorithms.” He wrote me about this several more times, suggesting various kinds of collaborations. Finally, in 2018 he told me how the method worked, saying it involved doing reversible arithmetic using balanced ternary. (Strangely enough, years earlier Ed had told me about Soviet computers that also used balanced ternary.)
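For readers unfamiliar with it: balanced ternary represents integers with the digit set {−1, 0, +1}, and it is indeed the system the Soviet Setun machines used. Here is a minimal conversion sketch in Python (my own illustration, not Ed’s reversible-multiplier method, which he never published to me in detail):

```python
# Balanced ternary: digits -1, 0, +1, least-significant first.
# Illustration only; this is not Ed's factoring method.

def to_balanced_ternary(n):
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:          # write 2 as 3 - 1: emit digit -1, carry 1
            r = -1
        digits.append(r)
        n = (n - r) // 3
    return digits or [0]

def from_balanced_ternary(digits):
    return sum(d * 3 ** i for i, d in enumerate(digits))
```

For example, to_balanced_ternary(5) gives [-1, -1, 1], i.e. 9 − 3 − 1 = 5; every integer, positive or negative, gets a unique representation with no separate sign bit, which is part of the system’s appeal for reversible arithmetic.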

\n

I think that was the last technical conversation I had with Ed. A couple of years later I sent him the book about our Physics Project with the inscription:

\n

Click to enlarge

\n

And I would see him at least once a year at the Boston-area physics get-together organized by Boston University. He would always tell me stories. Often the same stories, and sometimes stories about me. And indeed as I was writing this piece I actually found a video Ed made in November 2020 that has such a story, albeit by this point seriously muddled (and, no, I’ve basically never “run” a cellular automaton by hand in my life!):

\n
\n

I used to organize meetings in the Caribbean and I did this because I had an island in the Caribbean … I invited Wolfram to come down. Wolfram had done pioneering work in cellular automata. … He was a great guy, you know, and I wanted him to get on the bandwagon … He shows up at the meeting and he had done all his work by hand as had everyone else in cellular automata. He didn’t think of using a computer. [!] I had a display processor that I modified to be able to run a cellular automaton with the stuff that it used to put text up on the screen. And so I’m showing him a cellular automata running at 60 frames a second continuously like a movie. This was 10,000 times faster than doing it by hand which is what he’d always done. He never thought of using a computer to do cellular automata and he turns around and walks out and and he left the island and went back to someplace else. So [later] I went to his meeting at Los Alamos and I ran into him again and he was now doing computer work. And I said to him “How come in all your work you don’t have a reversible [rule]”, and he says to me “Oh, reversible ones are all trivial”. And I went up and this is the most telling thing about his intellect: he’s a very smart guy [and when I] showed him how he could change his rule slightly and make it reversible his eyes just about popped out of his head and he knew I was correct.

\n
\n
\n

I may have introduced him to this field but what he has done is he is far better than I at getting other people involved. I’ve never bothered and I don’t have the talent that he has for that. What he did was he came up with similar ideas and initially he didn’t give me the credit I thought I deserved. But it became apparent to me that he did this independently and he’s better at writing things and better at hiring bright people who can do things than I ever was.

\n
\n

And right after that, Ed ends the video with:

\n
\n

As I look back on my career I’ve had a fantastic life and I’m not unhappy about any aspect of it because, you know, I’ve accomplished everything I might have done and in spite of various handicaps—like not being a writer—I still have done a lot and the world, uh, understands me, I think, and appreciates what I’ve done.

\n
\n

When I saw Ed in 2022 he wasn’t able to say much. But, though it was a struggle, he was keen to make one point to me, that seemed to matter a lot to him: “You’ve managed to get people to follow you”, he said “I was never able to do that”. I saw Ed one last time this May. Joyce explained that Ed had “bumped his head”, and, in a very Ed-like way, she was avoiding a repeat by getting him to wear a bike helmet. She wanted someone to snap a picture of me and her with Ed:

\n

Click to enlarge

\n

Six weeks later, Ed died, at the age of 88.

\n

I went to see Joyce and Rick a few weeks later, among other things to check facts for this piece. I’d heard from Ed that his ancestors had provided wood for the imperial palace in St. Petersburg. But I’d also heard from someone else that Ed had said he was descended from Mongolian royalty. And as I was about to leave, I thought I might as well ask. “Oh yes”, they said. “And Ed’s father even wrote a historical novel about it”. And they showed me two books (both from the mid-1980s):

\n

Click to enlarge

\n

I’m not sure who Sarah, Queen of Mongolia was, but the book blurb claims that Ed’s father was her great-great-great-grandson—and goes on to speak of the “strong family inheritance of a mind that analyzes not only the injustice of human oppression but offers realistic and beneficial solutions”.

\n

Summing Up Ed

\n

“Can that really be true?” I often asked myself when hearing yet another of Ed’s implausible stories. And of course it didn’t help that stories he told—even to me—about me weren’t true. But the remarkable thing in writing this piece is that I’ve been able to verify that a lot of Ed’s stories—implausible though they may have sounded—were in fact true. Yes, they were often embellished, and parts that didn’t reflect so well on Ed were omitted. But together they defined a remarkable tapestry of a life.

\n

It was in many ways a very independent life. Ed had friends and family members to whom he stayed close throughout his life. But mostly it was “Ed for himself, against the world”. He didn’t want to learn anything from anyone else; he wanted to figure out everything for himself. He wanted to invent his own ideas; he wasn’t too interested in other people’s. In a rather Air-Force-pilot kind of way (“eject or not?”) he liked to be decisive—and he liked to be incisive too, always figuring out a clear, simple thing to say. Sometimes that came across as naive. And sometimes it was in fact naive. But mostly Ed didn’t seem to mind much; he would just go on to another idea.

\n

Ed was a great storyteller, and an engaging speaker. For some reason he developed the theory that he couldn’t write—but there’s ample evidence, going back even to his teenage years, that this wasn’t true. If there was a problem, it was with content, not writing. And the issue with the content was that it tended to just be too Ed-specific—too insular—and not connected enough for other people to be able to understand or appreciate it.

\n

I don’t know what Ed was like as a manager; I rather suspect he may have suffered from trying to be a bit too clever, with too many ideas and too much gamification. In the end, he felt he’d failed as a leader, and perhaps that was inevitable given how independent he always wanted to be. Despite his stints as an academic administrator and as a CEO, Ed was in the end fundamentally a lone warrior (and problem solver), not a general.

\n

And what about all those ideas? Most never developed very far. Some were pretty wild. But many had at least a kernel of visionary insight. The details of the universe as a cellular automaton didn’t make sense. But the idea that the universe is somehow computational is surely correct. And spread over the course of more than six decades, Ed spun out nuggets of ideas that would later appear—usually much more developed—in a remarkable range of areas.

\n

Ed projected a kind of personal serenity—yet he was in many ways deeply competitive. Most of the time, though, he was able to define the arena of his competitiveness so idiosyncratically that there really weren’t other contenders. And I think in the end Ed felt pretty good about all the things he’d managed to do in his life. It was fitting that he owned an actual island. Because somehow an island was a metaphor for Ed’s life: separate, independent and unique.

\n

Thanks

\n

I’ve had help with information for this piece from many people, including Joyce Fredkin, Rick Fredkin, Simson Garfinkel, Andrea Gerlach, Bill Gosper, Howard Gutowitz, Steven Levy, Norm Margolus, Margaret Minsky, Dave Moon, John Moussouris, Mark Nahabedian, Walter Parkes, David Reiss, Brian Silverman, George Sulak, Larry Sulak and Matthew Szudzik. (Tom Toffoli agreed to talk, but didn’t show up.) I thank the Department of Distinctive Collections at the MIT Library for access to the Fredkin papers archive there. Thanks also to Brad Klee and Nik Murzin for technical help.

\n

Appendix: Analyzing the Not-Quite-Circle

\n

Here’s what the SALT cellular automaton does for two sizes of initial “string”:

\n
\n
\n

\n

For an initial string of length n (with n > 2), the overall period is 54n – 43, and the envelope “woven” going through all configurations is:

\n
\n
\n

\n

The “circle” is obtained by averaging the positions of all cells present at a given time step. The “circle” is always planar, but its effective radius varies with direction (i.e. as the system steps through each cycle):

\n
\n
\n

\n

Ed and Dan Miller looked at the standard deviation of the effective radius as a function of n, computing it up to n = 20, and getting the following results:

\n

Click to enlarge

\n

It looked as if the standard deviation was just going to go smoothly to zero—so that for an infinite string one would get a perfect circle. But that turns out not to be true, as one can see by extending the computation to slightly larger values of n:

\n
\n
\n

\n

And actually there’s a minimum at n = 43, with standard deviation 0.0012 (and fractional size discrepancy 0.0048)—and it doesn’t look like even for n → ∞ one will get a perfect circle.

\n

But how can one work out the n → ∞ case? It’s actually a nice application for calculus.

\n

First, notice that the “basket” consists of a series of layers of a cube viewed from one of its corners, or in other words a sequence of shapes like this:

\n
\n
\n

\n

Here’s how these are formed as one sweeps through the cube:

\n
\n
\n

\n

One can think of the string in the cellular automaton as spanning these “layers”, and successively moving around all of them as the cellular automaton evolves. In the continuum limit, there’s effectively a parameter t that defines where on each “layer curve” one is at a particular time. Conveniently enough, the length of all the layer curves is the same (for a unit cube it is 3√2 ≈ 4.24). With successive layers parametrized by a variable s (running from 0 to 1) the corners of the layer curves (all normalized to have length 1) are given by:

\n
\n
\n

\n
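That constant layer-curve length is easy to check numerically: each layer curve is the hexagon in which a plane perpendicular to the cube’s main diagonal (x + y + z = 1 + s) cuts the unit cube, and its perimeter is 3√2 ≈ 4.24 for every s. A small sketch (my own check, not code from Ed’s paper):

```python
import math

def layer_curve_vertices(s):
    # Hexagon where the plane x + y + z = 1 + s cuts the unit cube
    # (0 < s < 1); vertices in cyclic order, each consecutive pair
    # lying on a common cube face.
    return [(s, 1, 0), (0, 1, s), (0, s, 1),
            (s, 0, 1), (1, 0, s), (1, s, 0)]

def perimeter(pts):
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:] + pts[:1]))

# The perimeter is 3*sqrt(2) regardless of where the layer sits
for s in (0.1, 0.25, 0.5, 0.75, 0.9):
    assert abs(perimeter(layer_curve_vertices(s)) - 3 * math.sqrt(2)) < 1e-12
```

The hexagon’s side lengths alternate between s√2 and (1 − s)√2, three of each, so the total is 3√2 whatever the value of s.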

Now we need to find the actual x, y positions of string elements (AKA infinitesimal cells) as a function of s and t. Since the edges of the layer polygons are always straight, in each of a series of “piecewise regions” in s and t (with breakpoints defined by the corners of the polygons), we get expressions for x and y that are linear in s and t:

\n
\n
\n

\n

One subtlety is that the string in essence turns as time progresses, so that it effectively samples a different t value for different layers s. To correct for this, we have to find for which t we get x = 0 for a given s. It’s convenient to put the center of all our layer curves at {0, 0}, and we can do this now by subtracting . Then the (first) value of t at which x = 0 is given simply by:

\n
\n
\n

\n

The parametric surface we now get as a function of t is (with discrete lines indicating particular values of s):

\n
\n
\n

\n

Now we can slice the parametric surface not in discrete s values but instead in discrete t values—thus getting what’s basically a sequence of effective strings at discrete times:

\n
\n
\n

\n

The centroids of the strings are indicated in green, and these are then points on our potential circle. Using what we did above, the radius of this “circle” as a function of t can then be found by integrating over s. The result is algebraically complicated, but has a closed form:

\n
\n
\n

\n

Integrating this over t we get the “average radius”, normalized to “circumference 1” from the fact that t varies from 0 to 1 going “around the circle”:

\n
\n
\n

\n

(This means that the “effective π” for this circle is about 3.437.)
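That figure is just C = 2πr applied to the normalized curve: with the circumference fixed at 1, the stated effective π pins down the average radius (the ≈ 0.1455 below is inferred from the 3.437, not an independent result):

```latex
\pi_{\mathrm{eff}} \;=\; \frac{C}{2\bar r} \;=\; \frac{1}{2\bar r} \;\approx\; 3.437
\qquad\Longrightarrow\qquad \bar r \;\approx\; 0.1455
```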

\n

Now we can plot the “wiggle” of the radius as a function of “angle” (i.e. t):

\n
\n
\n

\n

It looks a bit like a sine curve, but it’s not one. And, for example, it isn’t even symmetrical. Its maxima (which occur at odd multiples of 30°) are

\n
\n
\n

\n

while its minima (at even multiples of 30°) are

\n
\n
\n

\n

and dividing by the average radius these are about 1.00734 and 0.992175.

\n

The ratio of maximum to minimum (effectively “wiggle amplitude”) is:

\n
\n
\n

\n

Meanwhile, the standard deviation can be obtained as an integral over t, and the final result is

\n
\n
\n

\n

which is about 2.4 times larger than what we get at n = 100. We can see the approach to the asymptotic value by computing integrals over t for progressively larger numbers of discrete values of s (which, we should emphasize, is similar to values of n, but not quite the same, particularly for small n):

\n
\n
\n

\n", - "category": "Historical Perspectives", - "link": "https://writings.stephenwolfram.com/2023/08/remembering-the-improbable-life-of-ed-fredkin-1934-2023-and-his-world-of-ideas-and-stories/", - "creator": "Mark Long", - "pubDate": "Tue, 22 Aug 2023 20:03:54 +0000", - "enclosure": "https://content.wolfram.com/sites/43/2023/08/PERQ-1.mov", - "enclosureType": "video/quicktime", - "image": "https://content.wolfram.com/sites/43/2023/08/PERQ-1.mov", - "id": "", - "language": "en", - "folder": "", - "feed": "wolfram", - "read": false, - "favorite": false, - "created": false, - "tags": [], - "hash": "427cfd5d97716dee9a34369b72a7b686", - "highlights": [] - }, { "title": "Generative AI Space and the Mental Imagery of Alien Minds", "description": "\"\"Click on any image in this post to copy the code that produced it and generate the output on your own computer in a Wolfram notebook. AIs and Alien Minds How do alien minds perceive the world? It’s an old and oft-debated question in philosophy. And it now turns out to also be a question […]", @@ -241,28 +417,6 @@ "hash": "f51ac7a3875bd5279a95ee7f047858c0", "highlights": [] }, - { - "title": "LLM Tech and a Lot More: Version 13.3 of Wolfram Language and Mathematica", - "description": "\"\"The Leading Edge of 2023 Technology … and Beyond Today we’re launching Version 13.3 of Wolfram Language and Mathematica—both available immediately on desktop and cloud. It’s only been 196 days since we released Version 13.2, but there’s a lot that’s new, not least a whole subsystem around LLMs. Last Friday (June 23) we celebrated 35 […]", - "content": "\"\"


\n

The Leading Edge of 2023 Technology … and Beyond

\n

Today we’re launching Version 13.3 of Wolfram Language and Mathematica—both available immediately on desktop and cloud. It’s only been 196 days since we released Version 13.2, but there’s a lot that’s new, not least a whole subsystem around LLMs.

\n

Last Friday (June 23) we celebrated 35 years since Version 1.0 of Mathematica (and what’s now Wolfram Language). And to me it’s incredible how far we’ve come in these 35 years—yet how consistent we’ve been in our mission and goals, and how well we’ve been able to just keep building on the foundations we created all those years ago.

\n

And when it comes to what’s now Wolfram Language, there’s a wonderful timelessness to it. We’ve worked very hard to make its design as clean and coherent as possible—and to make it a timeless way to elegantly represent computation and everything that can be described through it.

\n

Last Friday I fired up Version 1 on an old Mac SE/30 computer (with 2.5 megabytes of memory), and it was a thrill to see functions like Plot and NestList work just as they would today—albeit a lot slower. And it was wonderful to be able to take (on a floppy disk) the notebook I created with Version 1 and have it immediately come to life on a modern computer.

\n

But even as we’ve maintained compatibility over all these years, the scope of our system has grown out of all recognition—with everything in Version 1 now occupying but a small sliver of the whole range of functionality of the modern Wolfram Language:

\n

Versions 1.0 and 13.3 of Wolfram Language compared

\n

So much about Mathematica was ahead of its time in 1988, and perhaps even more about Mathematica and the Wolfram Language is ahead of its time today, 35 years later. From the whole idea of symbolic programming, to the concept of notebooks, the universal applicability of symbolic expressions, the notion of computational knowledge, and concepts like instant APIs and so much more, we’ve been energetically continuing to push the frontier over all these years.

\n

Our long-term objective has been to build a full-scale computational language that can represent everything computationally, in a way that’s effective for both computers and humans. And now—in 2023—there’s a new significance to this. Because with the advent of LLMs our language has become a unique bridge between humans, AIs and computation.

\n

The attributes that make Wolfram Language easy for humans to write, yet rich in expressive power, also make it ideal for LLMs to write. And—unlike traditional programming languages— Wolfram Language is intended not only for humans to write, but also to read and think in. So it becomes the medium through which humans can confirm or correct what LLMs do, to deliver computational language code that can be confidently assembled into a larger system.

\n

The Wolfram Language wasn’t originally designed with the recent success of LLMs in mind. But I think it’s a tribute to the strength of its design that it now fits so well with LLMs—with so much synergy. The Wolfram Language is important to LLMs—in providing a way to access computation and computational knowledge from within the LLM. But LLMs are also important to Wolfram Language—in providing a rich linguistic interface to the language.

\n

We’ve always built—and deployed—Wolfram Language so it can be accessible to as many people as possible. But the advent of LLMs—and our new Chat Notebooks—opens up Wolfram Language to vastly more people. Wolfram|Alpha lets anyone use natural language—without prior knowledge—to get questions answered. Now with LLMs it’s possible to use natural language to start defining potential elaborate computations.

\n

As soon as you’ve formulated your thoughts in computational terms, you can immediately “explain them to an LLM”, and have it produce precise Wolfram Language code. Often when you look at that code you’ll realize you didn’t explain yourself quite right, and either the LLM or you can tighten up your code. But anyone—without any prior knowledge—can now get started producing serious Wolfram Language code. And that’s very important in seeing Wolfram Language realize its potential to drive “computational X” for the widest possible range of fields X.

\n

But while LLMs are “the biggest single story” in Version 13.3, there’s a lot else in Version 13.3 too—delivering the latest from our long-term research and development pipeline. So, yes, in Version 13.3 there’s new functionality not only in LLMs but also in many “classic” areas—as well as in new areas having nothing to do with LLMs.

\n

Across the 35 years since Version 1 we’ve been able to continue accelerating our research and development process, year by year building on the functionality and automation we’ve created. And we’ve also continually honed our actual process of research and development—for the past 5 years sharing our design meetings on open livestreams.

\n

Version 13.3 is—from its name—an “incremental release”. But—particularly with its new LLM functionality—it continues our tradition of delivering a long list of important advances and updates, even in incremental releases.

\n

LLM Tech Comes to Wolfram Language

\n

LLMs make possible many important new things in the Wolfram Language. And since I’ve been discussing these in a series of recent posts, I’ll give only a fairly short summary here. More details are in the other posts, both ones that have already appeared and ones that will appear soon.

\n
\n

To ensure you have the latest Chat Notebook functionality installed and available, use:

\n
PacletInstall[\"Wolfram/Chatbook\" → \"1.0.0\", UpdatePacletSites → True]
\n

\n

\n

The most immediately visible LLM tech in Version 13.3 is Chat Notebooks. Go to File > New > Chat-Enabled Notebook and you’ll get a Chat Notebook that supports “chat cells” that let you “talk to” an LLM. Press ' (quote) to get a new chat cell:

\n

Plot two sine curves

\n

You might not like some details of what got done (do you really want those boldface labels?) but I consider this pretty impressive. And it’s a great example of using an LLM as a “linguistic interface” with common sense, that can generate precise computational language, which can then be run to get a result.

\n

This is all very new technology, so we don’t yet know what patterns of usage will work best. But I think it’s going to go like this. First, you have to think computationally about whatever you’re trying to do. Then you tell it to the LLM, and it’ll produce Wolfram Language code that represents what it thinks you want to do. You might just run that code (or the Chat Notebook will do it for you), and see if it produces what you want. Or you might read the code, and see if it’s what you want. But either way, you’ll be using computational language—Wolfram Language—as the medium to formalize and express what you’re trying to do.

\n

When you’re doing something you’re familiar with, it’ll almost always be faster and better to think directly in Wolfram Language, and just enter the computational language code you want. But if you’re exploring something new, or just getting started on something, the LLM is likely to be a really valuable way to “get you to first code”, and to start the process of crispening up what you want in computational terms.

\n

If the LLM doesn’t do exactly what you want, then you can tell it what it did wrong, and it’ll try to correct it—though sometimes you can end up doing a lot of explaining and having quite a long dialog (and, yes, it’s often vastly easier just to type Wolfram Language code yourself):

\n

Draw red and green semicircles

\n

Redraw red and green semicircles

\n

Sometimes the LLM will notice for itself that something went wrong, and try changing its code, and rerunning it:

\n

Make table of primes

\n

And even if it didn’t write a piece of code itself, it’s pretty good at piping up to explain what’s going on when an error is generated:

\n

Error report

\n

And actually it’s got a big advantage here, because “under the hood” it can look at lots of details (like stack trace, error documentation, etc.) that humans usually don’t bother with.

\n

To support all this interaction with LLMs, there’s all kinds of new structure in the Wolfram Language. In Chat Notebooks there are chat cells, and there are chatblocks (indicated by gray bars, and generated with ~) that delimit the range of chat cells that will be fed to the LLM when you press shift+enter on a new chat cell. And, by the way, the whole mechanism of cells, cell groups, etc. that we invented 36 years ago now turns out to be extremely powerful as a foundation for Chat Notebooks.

\n

One can think of the LLM as a kind of “alternate evaluator” in the notebook. And there are various ways to set up and control it. The most immediate is in the menu associated with every chat cell and every chatblock (and also available in the notebook toolbar):

\n

Chat cell and chatblock menu

\n

The first items here let you define the “persona” for the LLM. Is it going to act as a Code Assistant that writes code and comments on it? Or is it just going to be a Code Writer, that writes code without being wordy about it? Then there are some “fun” personas—like Wolfie and Birdnardo—that respond “with an attitude”. The Advanced Settings let you do things like set the underlying LLM model you want to use—and also what tools (like Wolfram Language code evaluation) you want to connect to it.

\n

Ultimately, personas are mostly just special prompts for the LLM (sometimes together with tools, etc.). And one of the new things we’ve recently launched to support LLMs is the Wolfram Prompt Repository:

\n

Wolfram Prompt Repository

\n

The Prompt Repository contains several kinds of prompts. The first are personas, which are used to “style” and otherwise inform chat interactions. But then there are two other types of prompts: function prompts, and modifier prompts.

\n

Function prompts are for getting the LLM to do something specific, like summarize a piece of text, or suggest a joke (it’s not terribly good at that). Modifier prompts are for determining how the LLM should modify its output, for example translating into a different human language, or keeping it to a certain length.

\n

You can pull in function prompts from the repository into a Chat Notebook by using !, and modifier prompts using #. There’s also a ^ notation for saying that you want the “input” to the function prompt to be the cell above:

\n

ScientificJargonize

\n

This is how you can access LLM functionality from within a Chat Notebook. But there’s also a whole symbolic programmatic way to access LLMs that we’ve added to the Wolfram Language. Central to this is LLMFunction, which acts very much like a Wolfram Language pure function, except that it gets “evaluated” not by the Wolfram Language kernel, but by an LLM:

\n
\n
\n

\n
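As a minimal sketch of what such a call looks like (the template-slot syntax follows the LLMFunction documentation; the prompt text and argument here are illustrative, and running this requires a configured LLM service connection):

```wolfram
(* A symbolic function whose "evaluation" is delegated to an LLM;
   the `` slot gets filled by the argument, as with StringTemplate *)
f = LLMFunction["Give a one-sentence definition of ``."];
f["entropy"]
```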

You can access a function prompt from the Prompt Repository using LLMResourceFunction:

\n
\n
\n

\n
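A hedged sketch of what such a call might look like ("Emojify" is a function prompt in the Wolfram Prompt Repository; again an LLM service connection is required):

```wolfram
(* Pull a function prompt from the Wolfram Prompt Repository and apply it *)
LLMResourceFunction["Emojify"]["The quick brown fox jumps over the lazy dog"]
```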

There’s also a symbolic representation for chats. Here’s an empty chat:

\n
\n
\n

\n

And here now we “say something”, and the LLM responds:

\n
\n
\n

\n
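In code form, the pattern might look roughly like this (a sketch assuming the ChatObject/ChatEvaluate interface described in the 13.3 documentation):

```wolfram
chat = ChatObject[];                       (* an empty symbolic chat *)
chat = ChatEvaluate[chat, "Hello there!"]  (* add a message; the LLM appends its response *)
```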

There’s lots of depth to both Chat Notebooks and LLM functions, as I’ve described elsewhere. There’s LLMExampleFunction for getting an LLM to follow examples you give. There’s LLMTool for giving an LLM a way to call functions in the Wolfram Language as “tools”. And there’s LLMSynthesize, which provides raw access to the LLM and its text completion and other capabilities. (And controlling all of this is $LLMEvaluator, which defines the default LLM configuration to use, as specified by an LLMConfiguration object.)

\n

I consider it rather impressive that we’ve been able to get to the level of support for LLMs that we have in Version 13.3 in less than six months (along with building things like the Wolfram Plugin for ChatGPT, and the Wolfram ChatGPT Plugin Kit). But there’s going to be more to come, with LLM functionality increasingly integrated into Wolfram Language and Notebooks, and, yes, Wolfram Language functionality increasingly integrated as a tool into LLMs.

\n

Line, Surface and Contour Integration

\n

“Find the integral of the function ___” is a typical core thing one wants to do in calculus. And in Mathematica and the Wolfram Language that’s achieved with Integrate. But particularly in applications of calculus, it’s common to want to ask slightly more elaborate questions, like “What’s the integral of ___ over the region ___?”, or “What’s the integral of ___ along the line ___?”

\n

Almost a decade ago (in Version 10) we introduced a way to specify integration over regions—just by giving the region “geometrically” as the domain of the integral:

\n
\n
\n

\n
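A representative example of the geometric form (not necessarily the one pictured in the original post):

```wolfram
(* Integrate a scalar function over a region specified geometrically *)
Integrate[x^2 + y^2, {x, y} ∈ Disk[{0, 0}, 1]]
```

Converting to polar coordinates shows the value is π/2.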

It had always been possible to write out such an integral in “standard Integrate” form

\n
\n
\n

\n
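For instance, integrating over a unit disk in “standard Integrate” form means restricting the integrand with Boole (an illustrative reconstruction, not necessarily the exact example pictured):

```wolfram
(* "Standard Integrate" form: Boole confines the integrand to the unit disk *)
Integrate[Boole[x^2 + y^2 <= 1] (x^2 + y^2), {x, -1, 1}, {y, -1, 1}]
```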

but the region specification is much more convenient—as well as being much more efficient to process.

\n

Finding an integral along a line is also something that can ultimately be done in “standard Integrate” form. And if you have an explicit (parametric) formula for the line this is typically fairly straightforward. But if the line is specified in a geometrical way then there’s real work to do to even set up the problem in “standard Integrate” form. So in Version 13.3 we’re introducing the function LineIntegrate to automate this.

\n

LineIntegrate can deal with integrating both scalar and vector functions over lines. Here’s an example where the line is just a straight line:

\n
\n
\n

\n
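An illustrative case (the signature follows the LineIntegrate reference page; the particular integrand is an assumption):

```wolfram
(* Scalar line integral of x + y along the segment from {0,0} to {1,1} *)
LineIntegrate[x + y, {x, y} ∈ Line[{{0, 0}, {1, 1}}]]
```

Parametrizing the segment as {t, t} with arc-length element √2 dt gives ∫₀¹ 2t √2 dt = √2.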

But LineIntegrate also works for lines that aren’t straight, like this parametrically specified one:

\n
\n
\n

\n
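For a curved line one might write (again with an illustrative integrand):

```wolfram
(* Line integral of x^2 around the unit circle *)
LineIntegrate[x^2, {x, y} ∈ Circle[]]
```

With x = cos t, this is ∫₀^2π cos²t dt, which evaluates to π.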

To compute the integral also requires finding the tangent vector at every point on the curve—but LineIntegrate automatically does that:

\n
\n
\n

\n

Line integrals are common in applications of calculus to physics. But perhaps even more common are surface integrals, representing for example total flux through a surface. And in Version 13.3 we’re introducing SurfaceIntegrate. Here’s a fairly straightforward integral of flux that goes radially outward through a sphere:

\n
\n
\n

\n
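A sketch of the radial-flux case (the signature follows the SurfaceIntegrate reference page):

```wolfram
(* Flux of the radial vector field {x, y, z} outward through the unit sphere *)
SurfaceIntegrate[{x, y, z}, {x, y, z} ∈ Sphere[]]
```

On the unit sphere the field equals the outward normal, so the flux is just the surface area, 4π.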

Here’s a more complicated case:

\n
\n
\n

\n
\n
\n

\n

And here’s what the actual vector field looks like on the surface of the dodecahedron:

\n
\n
\n

\n

LineIntegrate and SurfaceIntegrate deal with integrating scalar and vector functions in Euclidean space. But in Version 13.3 we’re also handling another kind of integration: contour integration in the complex plane.

\n

We can start with a classic contour integral—illustrating Cauchy’s theorem:

\n
\n
\n

\n
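The classic example might read as follows (assuming the z ∈ contour notation introduced for complex contour integration in 13.3):

```wolfram
(* Contour integral of 1/z around the unit circle; Cauchy's residue
   theorem gives 2 π I for the simple pole at the origin *)
Integrate[1/z, z ∈ Circle[]]
```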

Here’s a slightly more elaborate complex function

\n
\n
\n

\n

and here’s its integral around a circular contour:

\n
\n
\n

\n

Needless to say, this still gives the same result, since the new contour still encloses the same poles:

\n
\n
\n

\n

More impressively, here’s the result for an arbitrary radius of contour:

\n
\n
\n

\n

And here’s a plot of the (imaginary part of the) result:

\n
\n
\n

\n

Contours can be of any shape:

\n
\n
\n

\n

The result for the contour integral depends on whether the pole is inside the “Pac-Man”:

\n
\n
\n

\n

Another Milestone for Special Functions

\n

One can think of special functions as a way of “modularizing” mathematical results. It’s often a challenge to know that something can be expressed in terms of special functions. But once one’s done this, one can immediately apply the independent knowledge that exists about the special functions.

\n

Even in Version 1.0 we already supported many special functions. And over the years we’ve added support for many more—to the point where we now cover everything that might reasonably be considered a “classical” special function. But in recent years we’ve also been tackling more general special functions. They’re mathematically more complex, but each one we successfully cover makes a new collection of problems accessible to exact solution and reliable numerical and symbolic computation.

\n

Most of the “classic” special functions—like Bessel functions, Legendre functions, elliptic integrals, etc.—are in the end univariate hypergeometric functions. But one important frontier in “general special functions” is the class corresponding to bivariate hypergeometric functions. And already in Version 4.0 (1999) we introduced one example of such a function: AppellF1. And, yes, it’s taken a while, but now in Version 13.3 we’ve finally finished doing the math and creating the algorithms to introduce AppellF2, AppellF3 and AppellF4.

\n

On the face of it, it’s just another function—with lots of arguments—whose value we can find to any precision:

\n
\n
\n

\n
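For instance (the argument values are illustrative; the parameter order a, b₁, b₂, c₁, c₂, x, y follows the AppellF2 reference page):

```wolfram
(* Numerically evaluate AppellF2 to 25-digit precision *)
N[AppellF2[2, 1/2, 1/3, 3, 4, 1/5, 1/7], 25]
```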

Occasionally it has a closed form:

\n
\n
\n

\n

But despite its mathematical sophistication, plots of it tend to look fairly uninspiring:

\n
\n
\n

\n

Series expansions begin to show a little more:

\n
\n
\n

\n

And ultimately this is a function that solves a pair of PDEs that can be seen as a generalization to two variables of the univariate hypergeometric ODE. So what other generalizations are possible? Paul Appell spent many years around the turn of the twentieth century looking—and came up with just four, which as of Version 13.3 now all appear in the Wolfram Language, as AppellF1, AppellF2, AppellF3 and AppellF4.

\n

To make special functions useful in the Wolfram Language they need to be “knitted” into other capabilities of the language—from numerical evaluation to series expansion, calculus, equation solving, and integral transforms. And in Version 13.3 we’ve passed another special function milestone, around integral transforms.

\n

When I started using special functions in the 1970s the main source of information about them tended to be a small number of handbooks that had been assembled through decades of work. When we began to build Mathematica and what’s now the Wolfram Language, one of our goals was to subsume the information in such handbooks. And over the years that’s exactly what we’ve achieved—for integrals, sums, differential equations, etc. But one of the holdouts has been integral transforms for special functions. And, yes, we’ve covered a great many of these. But there are exotic examples that can often only “coincidentally” be done in closed form—and that in the past have only been found in books of tables.

\n

But now in Version 13.3 we can do cases like:

\n
\n
\n

\n

And in fact we believe that in Version 13.3 we’ve reached the edge of what’s ever been figured out about Laplace transforms for special functions. The most extensive handbook—finally published in 1973—runs to about 400 pages. A few years ago we could do about 55% of the forward Laplace transforms in the book, and 31% of the inverse ones. But now in Version 13.3 we can do 100% of the ones that we can verify as correct (and, yes, there are definitely some mistakes in the book). It’s the end of a long journey, and a satisfying achievement in the quest to make as much mathematical knowledge as possible automatically computable.

\n

Finite Fields!

\n

Ever since Version 1.0 we’ve been able to do things like factoring polynomials modulo primes. And many packages have been developed that handle specific aspects of finite fields. But in Version 13.3 we now have complete, consistent coverage of all finite fields—and operations with them.

\n
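The long-standing “modulo primes” case looks like this:

```wolfram
(* Factor a polynomial over the integers mod 5 *)
Factor[x^4 + 1, Modulus -> 5]
(* (2 + x^2) (3 + x^2): indeed (x^2+2)(x^2+3) = x^4 + 5x^2 + 6 ≡ x^4 + 1 mod 5 *)
```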

Here’s our symbolic representation of the field of integers modulo 5 (AKA ℤ₅ or GF(5)):

\n
\n
\n

\n

And here are symbolic representations of the elements of this field—which in this particular case can be rather trivially identified with ordinary integers mod 5:

\n
\n
\n

\n

Arithmetic immediately works on these symbolic elements:

\n
\n
\n

\n
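Schematically (assuming the element-indexing form shown in the FiniteField reference documentation):

```wolfram
gf5 = FiniteField[5, 1];   (* Z_5: extension degree 1 over the prime 5 *)
gf5[2] + gf5[4]            (* behaves like 2 + 4 ≡ 1 mod 5 *)
```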

But where things get a bit trickier is when we’re dealing with prime-power fields. We represent the field GF(2³) symbolically as:

\n
\n
\n

\n

But now the elements of this field no longer have a direct correspondence with ordinary integers. We can still assign “indices” to them, though (with elements 0 and 1 being the additive and multiplicative identities). So here’s an example of an operation in this field:

\n
\n
\n

\n

But what actually is this result? Well, it’s an element of the finite field—with index 4—represented internally in the form:

\n
\n
\n

\n

The little box opens out to show the symbolic FiniteField construct:

\n

FiniteField construct

\n

And we can extract properties of the element, like its index:

\n
\n
\n

\n

So here, for example, are the complete addition and multiplication tables for this field:

\n
\n
\n

\n

For the field GF(7²) these look a little more complicated:

\n
\n
\n

\n

There are various number-theoretic-like functions that one can compute for elements of finite fields. Here’s an element of GF(5¹⁰):

\n
\n
\n

\n

The multiplicative order of this element (i.e. the smallest power of it that gives 1) is quite large:

\n
\n
\n

\n

Here’s its minimal polynomial:

\n
\n
\n

\n

But where finite fields really begin to come into their own is when one looks at polynomials over them. Here, for example, is factoring over GF(3²):

\n
\n
\n

\n

Expanding this gives a finite-field-style representation of the original polynomial:

\n
\n
\n

\n

Here’s the result of expanding a power of a polynomial over GF(3²):

\n
\n
\n

\n

More, Stronger Computational Geometry

\n

We originally introduced computational geometry in a serious way into the Wolfram Language a decade ago. And ever since then we’ve been building more and more capabilities in computational geometry.

\n

We’ve had RegionDistance for computing the distance from a point to a region for a decade. In Version 13.3 we’ve now extended RegionDistance so it can also compute the shortest distance between two regions:

\n
\n
\n

\n
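A case where the answer is easy to check by hand:

```wolfram
(* Shortest distance between two disks with centers 5 apart and radius 1 each *)
RegionDistance[Disk[{0, 0}, 1], Disk[{5, 0}, 1]]
(* 5 - 1 - 1 = 3 *)
```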

We’ve also introduced RegionFarthestDistance which computes the furthest distance between any two points in two given regions:

\n
\n
\n

\n

Another new function in Version 13.3 is RegionHausdorffDistance which computes the largest of all shortest distances between points in two regions; in this case it gives a closed form:

\n
\n
\n

\n
\n
\n

\n

Another pair of new functions in Version 13.3 are InscribedBall and CircumscribedBall—which give (n-dimensional) spheres that, respectively, just fit inside and outside the regions you specify:

\n
\n
\n

\n

In the past several versions, we’ve added functionality that combines geo computation with computational geometry. Version 13.3 has the beginning of another initiative—introducing abstract spherical geometry:

\n
\n
\n

\n

This works for spheres in any number of dimensions:

\n
\n
\n

\n

In addition to adding functionality, Version 13.3 also brings significant speed enhancements (often 10x or more) to some core operations in 2D computational geometry—making things like computing this fast even though it involves complicated regions:

\n
\n
\n

\n
\n
\n

\n

Visualizations Begin to Come Alive

\n

A great long-term strength of the Wolfram Language has been its ability to produce insightful visualizations in a highly automated way. In Version 13.3 we’re taking this further, by adding automatic “live highlighting”. Here’s a simple example, just using the function Plot. Instead of just producing static curves, Plot now automatically generates a visualization with interactive highlighting:

\n
\n
\n

\n

The same thing works for ListPlot:

\n
\n
\n

\n

The highlighting can, for example, show dates too:

\n
\n
\n

\n

There are many choices for how the highlighting should be done. The simplest thing is just to specify a style in which to highlight whole curves:

\n
\n
\n

\n

But there are many other built-in highlighting specifications. Here, for example, is \"XSlice\":

\n
\n
\n

\n
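As a sketch, the built-in specifications are given through the PlotHighlighting option (the function choice here is illustrative):

```wolfram
(* Highlight with a vertical slice that tracks the mouse position *)
Plot[{Sin[x], Cos[x]}, {x, 0, 10}, PlotHighlighting -> "XSlice"]
```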

In the end, though, highlighting is built up from a whole collection of components—like \"NearestPoint\", \"Crosshairs\", \"XDropline\", etc.—that you can assemble and style for yourself:

\n
\n
\n

\n

The option PlotHighlighting defines global highlighting in a plot. But by using the Highlighted “wrapper” you can specify that only a particular element in the plot should be highlighted:

\n
\n
\n

\n

For interactive and exploratory purposes, the kind of automatic highlighting we’ve just been showing is very convenient. But if you’re making a static presentation, you’ll need to “burn in” particular pieces of highlighting—which you can do with Placed:

\n
\n
\n

\n

\n

In indicating elements in a graphic there are different effects one can use. In Version 13.1 we introduced DropShadowing[]. In Version 13.3 we’re introducing Haloing:

\n
\n
\n

\n

Haloing can also be combined with interactive highlighting:

\n
\n
\n

\n

By the way, there are lots of nice effects you can get with Haloing in graphics. Here’s a geo example—including some parameters for the “orientation” and “thickness” of the haloing:

\n
\n
\n

\n

Publishing to Augmented + Virtual Reality

\n

Throughout the history of the Wolfram Language, 3D visualization has been an important capability. And we’re always looking for ways to share and communicate 3D geometry. Already back in the early 1990s we had experimental implementations of VR. But at the time there wasn’t anything like the kind of infrastructure for VR that would be needed to make this broadly useful. In the mid-2010s we then introduced VR functionality based on Unity—which provides powerful capabilities within the Unity ecosystem, but is not accessible outside it.

\n

Today, however, it seems there are finally broad standards emerging for AR and VR. And so in Version 13.3 we’re able to begin delivering what we hope will provide widely accessible AR and VR deployment from the Wolfram Language.

\n

At an underlying level, what we’re doing is to support the USD and GLTF geometry representation formats. But we’re also building a higher-level interface that allows anyone to “publish” 3D geometry for AR and VR.

\n

Given a piece of geometry (which for now can’t involve too many polygons), all you do is apply ARPublish:

\n
\n
\n

\n
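In its simplest form (the geometry here is illustrative; creating the underlying cloud object requires a cloud connection):

```wolfram
(* Publish a piece of 3D geometry for AR viewing; the result displays as a QR code *)
ARPublish[Sphere[]]
```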

The result is a cloud object that has a certain underlying UUID, but is displayed in a notebook as a QR code. Now all you do is look at this QR code with your phone (or tablet, etc.) camera, and press the URL it extracts.

\n

The result will be that the geometry you published with ARPublish now appears in AR on your phone:

\n

Augmented reality triptych

\n

Move your phone and you’ll see that your geometry has been realistically placed into the scene. You can also go to a VR “object” mode in which you can manipulate the geometry on your phone.

\n

“Under the hood” there are some slightly elaborate things going on—particularly in providing the appropriate data to different kinds of phones. But the result is a first step in the process of easily being able to get AR and VR output from the Wolfram Language—deployed in whatever devices support AR and VR.

\n

Getting the Details Right: The Continuing Story

\n

In every version of Wolfram Language we add all sorts of fundamentally new capabilities. But we also work to fill in details of existing capabilities, continually pushing to make them as general, consistent and accurate as possible. In Version 13.3 there are many details that have been “made right”, in many different areas.

\n

Here’s one example: the comparison (and sorting) of Around objects. Here are 10 random “numbers with uncertainty”:

\n
\n
\n

\n

These sort by their central value:

\n
\n
\n

\n

But if we look at these, many of their uncertainty regions overlap:

\n
\n
\n

\n

So when should we consider a particular number-with-uncertainty “greater than” another? In Version 13.3 we carefully take into account uncertainty when making comparisons. So, for example, this gives True:

\n
\n
\n

\n
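For example (illustrative values; per the behavior described here, well-separated uncertainty regions compare like ordinary numbers):

```wolfram
(* Uncertainty regions don't overlap, so the ordering is considered certain *)
Around[1, 0.1] < Around[3, 0.1]
```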

But when there’s too big an uncertainty in the values, we no longer consider the ordering “certain enough”:

\n
\n
\n

\n

Here’s another example of consistency: the applicability of Duration. We introduced Duration to apply to explicit time constructs, things like Audio objects, etc. But in Version 13.3 it also applies to entities for which there’s a reasonable way to define a “duration”:

\n
\n
\n

\n
\n
\n

\n

Dates (and times) are complicated things—and we’ve put a lot of effort into handling them correctly and consistently in the Wolfram Language. One concept that we introduced a few years ago is date granularity: the (subtle) analog of numerical precision for dates. But at first only some date functions supported granularity; now in Version 13.3 all date functions include a DateGranularity option—so that granularity can consistently be tracked through all date-related operations:

\n
\n
\n

\n

Also in dates, something that’s been added, particularly for astronomy, is the ability to deal with “years” specified by real numbers:

\n
\n
\n

\n

And one consequence of this is that it becomes easier to make a plot of something like astronomical distance as a function of time:

\n
\n
\n

\n

Also in astronomy, we’ve been steadily extending our capabilities to consistently fill in computations for more situations. In Version 13.3, for example, we can now compute sunrise, etc. not just from points on Earth, but from points anywhere in the solar system:

\n
\n
\n

\n

By the way, we’ve also made the computation of sunrise more precise. So now if you ask for the position of the Sun right at sunrise you’ll get a result like this:

\n
\n
\n

\n

How come the altitude of the Sun is not zero at sunrise? That’s because the disk of the Sun is of nonzero size, and “sunrise” is defined to be when any part of the Sun pokes over the horizon.

\n

Even Easier to Type: Affordances for Wolfram Language Input

\n

Back in 1988 when what’s now Wolfram Language first existed, the only way to type it was like ordinary text. But gradually we’ve introduced more and more “affordances” to make it easier and faster to type correct Wolfram Language input. In 1996, with Version 3, we introduced automatic spacing (and spanning) for operators, as well as brackets that flashed when they matched—and things like -> being automatically replaced by →. Then in 2007, with Version 6, we introduced—with some trepidation at first—syntax coloring. We’d had a way to request autocompletion of a symbol name all the way back to the beginning, but it’d never been good or efficient enough for us to make it happen all the time as you type. But in 2012, for Version 9, we created a much more elaborate autocomplete system—that was useful and efficient enough that we turned it on for all notebook input. A key feature of this autocomplete system was its context-sensitive knowledge of the Wolfram Language, and how and where different symbols and strings typically appear. Over the past decade, we’ve gradually refined this system to the point where I, for one, deeply rely on it.

\n

In recent versions, we’ve made other “typability” improvements. For example, in Version 12.3, we generalized the -> to → transformation to a whole collection of “auto operator renderings”. Then in Version 13.0 we introduced “automatching” of brackets, in which, for example, if you enter [ at the end of what you’re typing, you’ll automatically get a matching ].

\n

Making “typing affordances” work smoothly is a painstaking and tricky business. But in every recent version we’ve steadily been adding more features that—in very “natural” ways—make it easier and faster to type Wolfram Language input.

\n

In Version 13.3 one major change is an enhancement to autocompletion. Instead of just showing pure completions in which characters are appended to what’s already been typed, the autocompletion menu now includes “fuzzy completions” that fill in intermediate characters, change capitalization, etc.

\n

So, for example, if you type “lp” you now get ListPlot as a completion (the little underlines indicate where the letters you actually type appear):

\n

ListPlot autocompletion menu

\n

From a design point of view one thing that’s important about this is that it further removes the “short name” premium—and weights things even further on the side of wanting names that explain themselves when they’re read, rather than names that are easy to type in an unassisted way. With the Wolfram Function Repository it’s become increasingly common to want to type ResourceFunction. And we’d been thinking that perhaps we should have a special, short notation for that. But with the new autocompletion, one can operationally just press three keys—r, f, enter—to get to ResourceFunction:

\n

ResourceFunction autocompletion menu

\n

When one designs something and gets the design right, people usually don’t notice; things just “work as they expect”. But when there’s a design error, that’s when people notice—and are frustrated by—the design. But then there’s another case: a situation where, for example, there are two things that could happen, and sometimes one wants one, and sometimes the other. In doing the design, one has to pick a particular branch. And when this happens to be the branch people want, they don’t notice, and they’re happy. But if they want the other branch, it can be confusing and frustrating.

\n

In the design of the Wolfram Language one of the things that has to be chosen is the precedence for every operator: a + b × c means a + (b × c) because × has higher precedence than +. Often the correct order of precedences is fairly obvious. But sometimes it’s simply impossible to make everyone happy all the time. And so it is with -> and &. It’s very convenient to be able to add & at the end of something you type, and make it into a pure function. But that means if you type a b & it’ll turn the whole thing into a function: (a b) &. When functions have options, however, one often wants things like name -> function. The natural tendency is to type this as name -> body &. But this will mean (name -> body) & rather than name -> (body &). And, yes, when you try to run the function, it’ll notice it doesn’t have correct arguments and options specified. But you’d like to know that what you’re typing isn’t right as soon as you type it. And now in Version 13.3 we have a mechanism for that. As soon as you enter & to “end a function”, you’ll see the extent of the function flash:
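To make the precedence concrete, here is a small sketch (the PlotStyle example is ours, not from the post); FullForm reveals how the input actually parses:

```wolfram
(* & binds more loosely than ->, so the rule ends up inside the pure function *)
FullForm[Hold[PlotStyle -> #^2 &]]
(* Hold[Function[Rule[PlotStyle, Power[Slot[1], 2]]]] *)
```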

\n
\n
\n

\n

And, yup, you can see that’s wrong. Which gives you the chance to fix it as:

\n
\n
\n

\n

There’s another notebook-related update in Version 13.3 that isn’t directly related to typing, but will help in the construction of easy-to-navigate user interfaces. We’ve had ActionMenu since 2007—but it’s only been able to create one-level menus. In Version 13.3 it’s been extended to arbitrary hierarchical menus:

\n
\n
\n

\n

Again not directly related to typing, but now relevant to managing and editing code, there’s an update in Version 13.3 to package editing in the notebook interface. Bring up a .wl file and it’ll appear as a notebook. But its default toolbar is different from the usual notebook toolbar (and is newly designed in Version 13.3):

\n

New default toolbar

\n

Go To now gives you a way to immediately go to the definition of any function whose name matches what you type, as well as any section, etc.:

\n

Go To results

\n

The numbers on the right here are code line numbers; you can also go directly to a specific line number by typing :nnn.

\n

The Elegant Code Project

\n

One of the central goals—and achievements—of the Wolfram Language is to create a computational language that can be used not only as a way to tell computers what to do, but also as a way to communicate computational ideas for human consumption. In other words, Wolfram Language is intended not only to be written by humans (for consumption by computers), but also to be read by humans.

\n

Crucial to this is the broad consistency of the Wolfram Language, as well as its use of carefully chosen natural-language-based names for functions, etc. But what can we do to make Wolfram Language as easy and pleasant as possible to read? In the past we’ve balanced our optimization of the appearance of Wolfram Language between reading and writing. But in Version 13.3 we’ve got the beginnings of our Elegant Code project—to find ways to render Wolfram Language to be specifically optimized for reading.

\n

As an example, here’s a small piece of code (from my An Elementary Introduction to the Wolfram Language), shown in the default way it’s rendered in notebooks:

\n
\n
\n

\n

But in Version 13.3 you can use Format > Screen Environment > Elegant to set a notebook to use the current version of “elegant code”:

\n
\n
\n

\n

(And, yes, this is what we’re actually using for code in this post, as well as some other recent ones.) So what’s the difference? First of all, we’re using a proportionally spaced font that makes the names (here of symbols) easy to “read like words”. And second, we’re adding space between these “words”, and graying back “structural elements” like brackets and commas. When you write a piece of code, things like these structural elements need to stand out enough for you to “see they’re right”. But when you’re reading code, you don’t need to pay as much attention to them. Because the Wolfram Language is so based on “word-like” names, you can typically “understand what it’s saying” just by “reading these words”.

\n

Of course, making code “elegant” is not just a question of formatting; it’s also a question of what’s actually in the code. And, yes, as with writing text, it takes effort to craft code that “expresses itself elegantly”. But the good news is that the Wolfram Language—through its uniquely broad and high-level character—makes it surprisingly straightforward to create code that expresses itself extremely elegantly.

\n

But the point now is to make that code not only elegant in content, but also elegant in formatting. In technical documents it’s common to see math that’s at least formatted elegantly. But when one sees code, more often than not, it looks like something only a machine could appreciate. Of course, if the code is in a traditional programming language, it’ll usually be long and not really intended for human consumption. But what if it’s elegantly crafted Wolfram Language code? Well then we’d like it to look as attractive as text and math. And that’s the point of our Elegant Code project.

\n

There are many tradeoffs, and many issues to be navigated. But in Version 13.3 we’re definitely making progress. Here’s an example that doesn’t have so many “words”, but where the elegant code formatting still makes the “blocking” of the code more obvious:

\n
\n
\n

\n

Here’s a slightly longer piece of code, where again the elegant code formatting helps pull out “readable” words, as well as making the overall structure of the code more obvious:

\n
\n
\n

\n

Particularly in recent years, we’ve added many mechanisms to let one write Wolfram Language that’s easier to read. There are the auto operator renderings, like m[[i]] turning into the double-bracket form m〚i〛. And then there are things like the |-> notation for pure functions. One particularly important element is Iconize, which lets you show any piece of Wolfram Language input in a visually “iconized” form—which nevertheless evaluates just like the corresponding underlying expression:
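As a small illustration of Iconize (our own example, not from the post), the iconized subexpression still evaluates as the full underlying expression:

```wolfram
(* the icon is purely visual; evaluation sees the whole list *)
Total[Iconize[Range[1000]]]
(* 500500 *)
```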

\n
\n
\n

\n

Iconize lets you effectively hide details (like large amounts of data, option settings, etc.) But sometimes you want to highlight things. You can do it with Style, Framed, Highlighted—and in Version 13.3, Squiggled:

\n
\n
\n

\n

By default, all these constructs persist through evaluation. But in Version 13.3 all of them now have the option StripOnInput, and with this set, you have something that shows up highlighted in an input cell, but where the highlighting is stripped when the expression is actually fed to the Wolfram Language kernel.
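A minimal sketch of how that option might be used (the particular combination shown here is our assumption, based on the description above):

```wolfram
(* shows up highlighted in the notebook, but evaluates as plain x^2 + 1 *)
Highlighted[x^2 + 1, StripOnInput -> True]
```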

\n

These show their highlighting in the notebook:

\n
\n
\n

\n

But when used in input, the highlighting is stripped:

\n
\n
\n

\n

See More Also…

\n

A great strength of the Wolfram Language (yes, perhaps initiated by my original 1988 Mathematica Book) is its detailed documentation—which has now proved valuable not only for human users but also for AIs. Plotting the number of words that appear in the documentation in successive versions, we see a strong progressive increase:

\n

Words graph

\n

But with all that documentation, and all those new things to be documented, the problem of appropriately crosslinking everything has increased. Even back in Version 1.0, when the documentation was a physical book, there were “See Also’s” between functions:

\n

Version 1.0 documentation

\n

And by now there’s a complicated network of such See Also’s:

\n
\n
\n

\n

But that’s just the network of how functions point to functions. What about other kinds of constructs? Like formats, characters or entity types—or, for that matter, entries in the Wolfram Function Repository, Wolfram Data Repository, etc. Well, in Version 13.3 we’ve done a first iteration of crosslinking all these kinds of things.

\n

So here now are the “See Also” areas for Graph and Molecule:

\n

Graph see also options

\n

Molecule see also options

\n

Not only are there functions here; there are also other kinds of things that a person (or AI) looking at these pages might find relevant.

\n

It’s great to be able to follow links, but sometimes it’s better just to have material immediately accessible, without following a link. Back in Version 1.0 we made the decision that when a function inherits some of its options from a “base function” (say Plot from Graphics), we only need to explicitly list the non-inherited option values. At the time, this was a good way to save a little paper in the printed book. But now the optimization is different, and finally in Version 13.3 we have a way to show “All Options”—tucked away so it doesn’t distract from the typically-more-important non-inherited options.

\n

Here’s the setup for Plot. First, the list of non-inherited option values:

\n

Plot non-inherited option values

\n

Then, at the end of the Details section

\n

Details and options

\n

which opens to:

\n

Expanded list of all options

\n

Pictures from Words: Generative AI for Images

\n

One of the remarkable things that’s emerged as a possibility from recent advances in AI and neural nets is the generation of images from textual descriptions. It’s not yet realistic to do this at all well on anything but a high-end (and typically server) GPU-enabled machine. But in Version 13.3 there’s now a built-in function ImageSynthesize that can get images synthesized, for now through an external API.

\n

You give text, and ImageSynthesize will try to generate images for which that text is a description:
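A minimal call looks like this (the prompt text is ours, and an API key for the external service must be configured):

```wolfram
ImageSynthesize["an armchair shaped like an avocado"]
```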

\n
\n
\n

\n

Sometimes these images will be directly useful in their own right, perhaps as “theming images” for documents or user interfaces. Sometimes they will provide raw material that can be developed into icons or other art. And sometimes they are most useful as inputs to tests or other algorithms.

\n

And one of the important things about ImageSynthesize is that it can immediately be used as part of any Wolfram Language workflow. Pick a random sentence from Alice in Wonderland:
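One way to pick such a sentence (a sketch assuming the built-in ExampleData corpus):

```wolfram
(* choose a random sentence from the built-in Alice in Wonderland text *)
sentence = RandomChoice[TextSentences[ExampleData[{"Text", "AliceInWonderland"}]]]
```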

\n
\n
\n

\n

Now ImageSynthesize can “illustrate” it:

\n
\n
\n

\n

Or we can get AI to feed AI:

\n
\n
\n

\n
\n
\n

\n

ImageSynthesize is set up to automatically be able to synthesize images of different sizes:

\n
\n
\n

\n

You can take the output of ImageSynthesize and immediately process it:

\n
\n
\n

\n

ImageSynthesize can not only produce complete images, but can also fill in transparent parts of “incomplete” images:

\n
\n
\n

\n

In addition to ImageSynthesize and all the new LLM functionality, Version 13.3 also includes a number of advances in the core machine learning system for Wolfram Language. Probably the most notable are speedups of up to 10x and beyond for neural net training and evaluation on x86-compatible systems, as well as better models for ImageIdentify. There are also a variety of new networks in the Wolfram Neural Net Repository, particularly ones based on transformers.

\n

Digital Twins: Fitting System Models to Data

\n

It’s been five years since we first began to introduce industrial-scale systems engineering capabilities in the Wolfram Language. The goal is to be able to compute with models of engineering and other systems that can be described by (potentially very large) collections of ordinary differential equations and their discrete analogs. Our separate Wolfram System Modeler product provides an IDE and GUI for graphically creating such models.

\n

For the past five years we’ve been able to do high-efficiency simulation of these models from within the Wolfram Language. And over the past few years we’ve been adding all sorts of higher-level functionality for programmatically creating models, and for systematically analyzing their behavior. A major focus in recent versions has been the synthesis of control systems, and various forms of controllers.

\n

Version 13.3 now tackles a different issue, which is the alignment of models with real-world systems. The idea is to have a model which contains certain parameters, and then to determine these parameters by essentially fitting the model’s behavior to observed behavior of a real-world system.

\n

Let’s start by talking about a simple case where our model is just defined by a single ODE:

\n
\n
\n

\n

This ODE is simple enough that we can find its analytical solution:
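The post’s actual ODE isn’t reproduced in this extract; as a stand-in of the same flavor, take exponential decay with parameter a:

```wolfram
(* dy/dt == -a y with y(0) == 1 has a closed-form solution *)
DSolveValue[{y'[t] == -a y[t], y[0] == 1}, y[t], t]
(* E^(-a t) *)
```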

\n
\n
\n

\n

So now let’s make some “simulated real-world data” assuming a = 2, and with some noise:
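Such simulated data can be generated along these lines (again using exponential decay as a stand-in model, since the post’s actual model isn’t shown here):

```wolfram
(* samples of E^(-2 t) with Gaussian noise added *)
data = Table[{t, Exp[-2 t] + RandomVariate[NormalDistribution[0, 0.05]]},
   {t, 0, 2, 0.05}];
ListPlot[data]
```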

\n
\n
\n

\n

Here’s what the data looks like:

\n
\n
\n

\n

Now let’s try to “calibrate” our original model using this data. It’s a process similar to machine learning training. In this case we make an “initial guess” that the parameter a is 1; then when SystemModelCalibrate runs it shows the “loss” decreasing as the correct value of a is found:

\n
\n
\n

\n

The “calibrated” model does indeed have a ≈ 2:

\n
\n
\n

\n

Now we can compare the calibrated model with the data:

\n
\n
\n

\n

As a slightly more realistic engineering-style example, let’s look at a model of an electric motor (with both electrical and mechanical parts):

\n
\n
\n

\n

Let’s say we’ve got some data on the behavior of the motor; here we’ve assumed that we’ve measured the angular velocity of a component in the motor as a function of time. Now we can use this data to calibrate parameters of the model (here the resistance of a resistor and the damping constant of a damper):

\n
\n
\n

\n

Here are the fitted parameter values:

\n
\n
\n

\n

And here’s a full plot of the angular velocity data, together with the fitted model and its 95% confidence bands:

\n
\n
\n

\n

SystemModelCalibrate can be used not only in fitting a model to real-world data, but also for example in fitting simpler models to more complicated ones, making possible various forms of “model simplification”.

\n

Symbolic Testing Framework

\n

The Wolfram Language is by many measures one of the world’s most complex pieces of software engineering. And over the decades we’ve developed a large and powerful system for testing and validating it. A decade ago—in Version 10—we began to make some of our internal tools available for anyone writing Wolfram Language code. Now in Version 13.3 we’re introducing a more streamlined—and “symbolic”—version of our testing framework.

\n

The basic idea is that each test is represented by a symbolic TestObject, created using TestCreate:

\n
\n
\n

\n

On its own, TestObject is an inert object. You can run the test it represents using TestEvaluate:
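A minimal sketch of the two steps:

```wolfram
test = TestCreate[1 + 1, 2];   (* an inert TestObject *)
TestEvaluate[test]             (* runs the test, returning a result object *)
```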

\n
\n
\n

\n

Each test object has a whole collection of properties, some of which only get filled in when the test is run:

\n
\n
\n

\n

It’s very convenient to have symbolic test objects that one can manipulate using standard Wolfram Language functions, say selecting tests with particular features, or generating new tests from old. And when one builds a test suite, one does it just by making a list of test objects.

\n

This makes a list of test objects (and, yes, there’s some trickiness because TestCreate needs to keep unevaluated the expression that’s going to be tested):

\n
\n
\n

\n

But given these tests, we can now generate a report from running them:
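A sketch of building a small suite and reporting on it:

```wolfram
(* TestCreate holds its first argument, so each test stores the unevaluated input *)
tests = {TestCreate[1 + 1, 2], TestCreate[Sin[0.], 0.], TestCreate[2^10, 1024]};
TestReport[tests]
```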

\n
\n
\n

\n

TestReport has various options that allow you to monitor and control the running of a test suite. For example, here we’re saying to echo every \"TestEvaluated\" event that occurs:

\n
\n
\n

\n

Did You Get That Math Right?

\n

Most of what the Wolfram Language is about is taking inputs from humans (as well as programs, and now AIs) and computing outputs from them. But a few years ago we started introducing capabilities for having the Wolfram Language ask questions of humans, and then assessing their answers.

\n

In recent versions we’ve been building up sophisticated ways to construct and deploy “quizzes” and other collections of questions. But one of the core issues is always how to determine whether a person has answered a particular question correctly. Sometimes that’s easy to determine. If we ask “What is 2 + 2?”, the answer had better be “4” (or conceivably “four”). But what if we ask a question where the answer is some algebraic expression? The issue is that there may be many mathematically equal forms of that expression. And whether one considers a particular form to be the “right answer” depends on what exactly one is asking.

\n

For example, here we’re computing a derivative:

\n
\n
\n

\n

And here we’re doing a factoring problem:

\n
\n
\n

\n

These two answers are mathematically equal. And they’d both be “reasonable answers” for the derivative if it appeared as a question in a calculus course. But in an algebra course, one wouldn’t want to consider the unfactored form a “correct answer” to the factoring problem, even though it’s “mathematically equal”.
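The images of the two computations are not reproduced in this extract; an illustrative pair of this kind would be:

```wolfram
D[x^5/5 - x, x]     (* x^4 - 1 *)
Factor[x^4 - 1]     (* (-1 + x) (1 + x) (1 + x^2) *)
```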

\n

And to deal with these kinds of issues, we’re introducing in Version 13.3 more detailed mathematical assessment functions. With a \"CalculusResult\" assessment function, it’s OK to give the unfactored form:

\n
\n
\n

\n

But with a \"PolynomialResult\" assessment function, the algebraic form of the expression has to be the same for it to be considered “correct”:

\n
\n
\n

\n

There’s also another type of assessment function—\"ArithmeticResult\"—which only allows trivial arithmetic rearrangements, so that it considers 2 + 3 equivalent to 3 + 2, but doesn’t consider 2/3 equivalent to 4/6:

\n
\n
\n

\n

Here’s how you’d build a question with this:

\n
\n
\n

\n

And now if you type “2/3” it’ll say you’ve got it right, but if you type “4/6” it won’t. However, if you use, say, \"CalculusResult\" in the assessment function, it’ll say you got it right even if you type “4/6”.

\n

Streamlining Parallel Computation

\n

Ever since the mid-1990s there’s been the capability to do parallel computation in the Wolfram Language. And certainly for me it’s been critical in a whole range of research projects I’ve done. I currently have 156 cores routinely available in my “home” setup, distributed across 6 machines. It’s sometimes challenging from a system administration point of view to keep all those machines and their networking running as one wants. And one of the things we’ve been doing in recent versions—and now completed in Version 13.3—is to make it easier from within the Wolfram Language to see and manage what’s going on.

\n

It all comes down to specifying the configuration of kernels. And in Version 13.3 that’s now done using symbolic KernelConfiguration objects. Here’s an example of one:

\n
\n
\n

\n

There’s all sorts of information in the kernel configuration object:

\n
\n
\n

\n

It describes “where” a kernel with that configuration will be, how to get to it, and how it should be launched. The kernel might just be local to your machine. Or it might be on a remote machine, accessible through ssh, or https, or our own wstp (Wolfram Symbolic Transport Protocol) or lwg (Lightweight Grid) protocols.

\n

In Version 13.3 there’s now a GUI for setting up kernel configurations:

\n

Kernel configuration editor

\n

The Kernel Configuration Editor lets you enter all the details that are needed, about network connections, authentication, locations of executables, etc.

\n

But once you’ve set up a KernelConfiguration object, that’s all you ever need—for example to say “where” to do a remote evaluation:

\n
\n
\n

\n

ParallelMap and other parallel functions then just work by doing their computations on kernels specified by a list of KernelConfiguration objects. You can set up the list in the Kernels Settings GUI:
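Programmatically, a minimal sketch (assuming confs is such a list of KernelConfiguration objects):

```wolfram
LaunchKernels[confs];           (* start kernels from the configurations *)
ParallelMap[Prime, Range[8]]    (* {2, 3, 5, 7, 11, 13, 17, 19} *)
```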

\n

Parallel kernels settings

\n

Here’s my personal default collection of parallel kernels:

\n
\n
\n

\n

This now counts the number of individual kernels running on each machine specified by these configurations:

\n
\n
\n

\n

In Version 13.3 a convenient new feature is named collections of kernels. For example, this runs a single “representative” kernel on each distinct machine:

\n
\n
\n

\n

Just Call That C Function! Direct Access to External Libraries

\n

Let’s say you’ve got an external library written in C—or in some other language that can compile to a C-compatible library. In Version 13.3 there’s now foreign function interface (FFI) capability that allows you to directly call any function in the external library just using Wolfram Language code.

\n

Here’s a very trivial C function:

\n

Trivial C function

\n

This function happens to be included in compiled form in the compilerDemoBase library that’s part of Wolfram Language documentation. Given this library, you can use ForeignFunctionLoad to load the library and create a Wolfram Language function that directly calls the C addone function. All you need do is specify the library and C function, and then give the type signature for the function:
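In outline it looks like this (lib here stands for the path to the compiled library, and the exact type names are our assumption):

```wolfram
(* load the C function addone with signature int -> int *)
ff = ForeignFunctionLoad[lib, "addone", {"CInt"} -> "CInt"];
ff[41]
```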

\n
\n
\n

\n

Now ff is a Wolfram Language function that calls the C addone function:

\n
\n
\n

\n

The C function addone happens to have a particularly simple type signature, that can immediately be represented in terms of compiler types that have direct analogs as Wolfram Language expressions. But in working with low-level languages, it’s very common to have to deal directly with raw memory, which is something that never happens when you’re purely working at the Wolfram Language level.

\n

So, for example, in the OpenSSL library there’s a function called RAND_bytes, whose C type signature is:

\n

RAND_bytes

\n

And the important thing to notice is that this contains a pointer to a buffer buf that gets filled by RAND_bytes. If you were calling RAND_bytes from C, you’d first allocate memory for this buffer, then—after calling RAND_bytes—read back whatever was written to the buffer. So how can you do something analogous when you’re calling RAND_bytes using ForeignFunction in Wolfram Language? In Version 13.3 we’re introducing a family of constructs for working with pointers and raw memory.

\n

So, for example, here’s how we can create a Wolfram Language foreign function corresponding to RAND_bytes:

\n
\n
\n

\n

But to actually use this, we need to be able to allocate the buffer, which in Version 13.3 we can do with RawMemoryAllocate:

\n
\n
\n

\n

This creates a buffer that can store 10 unsigned chars. Now we can call rb, giving it this buffer:

\n
\n
\n

\n

rb will fill the buffer—and then we can import the results back into Wolfram Language:
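Putting the whole flow together as a sketch (the library location, type notation and import call here are assumptions based on the description, not verbatim from the post):

```wolfram
rb = ForeignFunctionLoad["libcrypto", "RAND_bytes",
   {"RawPointer"::["UnsignedInteger8"], "CInt"} -> "CInt"];
buf = RawMemoryAllocate["UnsignedInteger8", 10];   (* managed 10-byte buffer *)
rb[buf, 10];                                       (* the C side fills it *)
RawMemoryImport[buf, {"UnsignedInteger8", 10}]     (* read the bytes back *)
```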

\n
\n
\n

\n

There’s some complicated stuff going on here. RawMemoryAllocate does ultimately allocate raw memory—and you can see its hex address in the symbolic object that’s returned. But RawMemoryAllocate creates a ManagedObject, which keeps track of whether it’s being referenced, and automatically frees the memory that’s been allocated when nothing references it anymore.

\n

Long ago languages like BASIC provided PEEK and POKE functions for reading and writing raw memory. It was always a dangerous thing to do—and it’s still dangerous. But it’s somewhat higher level in Wolfram Language, where in Version 13.3 there are now functions like RawMemoryRead and RawMemoryWrite. (For writing data into a buffer, RawMemoryExport is also relevant.)

\n

Most of the time it’s very convenient to deal with memory-managed ManagedObject constructs. But for the full low-level experience, Version 13.3 provides UnmanageObject, which disconnects automatic memory management for a managed object, and requires you to explicitly use RawMemoryFree to free it.

\n

One feature of C-like languages is the concept of a function pointer. And normally the function that the pointer is pointing to is just something like a C function. But in Version 13.3 there’s another possibility: it can be a function defined in Wolfram Language. Or, in other words, from within an external C function it’s possible to call back into the Wolfram Language.

\n

Let’s use this C program:

\n

C program

\n

You can actually compile it right from Wolfram Language using:

\n
\n
\n

\n

Now we load frun as a foreign function—with a type signature that uses \"OpaqueRawPointer\" to represent the function pointer:

\n
\n
\n

\n

What we need next is to create a function pointer that points to a callback to Wolfram Language:

\n
\n
\n

\n

The Wolfram Language function here is just Echo. But when we call frun with the cbfun function pointer we can see our C code calling back into Wolfram Language to evaluate Echo:

\n
\n
\n

\n

ForeignFunctionLoad provides an extremely convenient way to call external C-like functions directly from top-level Wolfram Language. But if you’re calling C-like functions a great many times, you’ll sometimes want to do it using compiled Wolfram Language code. And you can do this using the LibraryFunctionDeclaration mechanism that was introduced in Version 13.1. It’ll be more complicated to set up, and it’ll require an explicit compilation step, but there’ll be slightly less “overhead” in calling the external functions.

\n

The Advance of the Compiler Continues

\n

For several years we’ve had an ambitious project to develop a large-scale compiler for the Wolfram Language. And in each successive version we’re further extending and enhancing the compiler. In Version 13.3 we’ve managed to compile more of the compiler itself (which, needless to say, is written in Wolfram Language)—thereby making the compiler more efficient in compiling code. We’ve also enhanced the performance of the code generated by the compiler—particularly by optimizing memory management done in the compiled code.

\n

Over the past several versions we’ve been steadily making it possible to compile more and more of the Wolfram Language. But it’ll never make sense to compile everything—and in Version 13.3 we’re adding KernelEvaluate to make it more convenient to call back from compiled code to the Wolfram Language kernel.

\n

Here’s an example:

\n
\n
\n

\n

We’ve got an argument n that’s declared as being of type MachineInteger. Then we’re doing a computation on n in the kernel, and using TypeHint to specify that its result will be of type MachineInteger. The arithmetic going on outside the KernelEvaluate can then be compiled, even though the KernelEvaluate itself is just calling uncompiled code:
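A sketch of the pattern being described:

```wolfram
cf = FunctionCompile[Function[Typed[n, "MachineInteger"],
    1 + KernelEvaluate[TypeHint[Prime[n], "MachineInteger"]]]];
cf[5]   (* 1 + Prime[5] = 12 *)
```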

\n
\n
\n

\n

There are other enhancements to the compiler in Version 13.3 as well. For example, Cast now allows data types to be cast in a way that directly emulates what the C language does. There’s also now SequenceType, which is a type analogous to the Wolfram Language Sequence construct—and able to represent an arbitrary-length sequence of arguments to a function.

\n

And Much More…

\n

In addition to everything we’ve already discussed here, there are lots of other updates and enhancements in Version 13.3—as well as thousands of bug fixes.

\n

Some of the additions fill out corners of functionality, adding completeness or consistency. Statistical fitting functions like LinearModelFit now accept input in all the various association etc. forms that machine learning functions like Classify accept. TourVideo now lets you “tour” GeoGraphics, with waypoints specified by geo positions. ByteArray now supports the “corner case” of zero-length byte arrays. The compiler can now handle byte array functions, and additional string functions. Nearly 40 additional special functions can now handle numeric interval computations. BarcodeImage adds support for UPCE and Code93 barcodes. SolidMechanicsPDEComponent adds support for the Yeoh hyperelastic model. And, twenty years after we first introduced export of SVG, there’s now built-in support for import of SVG not only to raster graphics, but also to vector graphics.

\n

There are new “utility” functions like RealValuedNumberQ and RealValuedNumericQ. There’s a new function FindImageShapes that begins the process of systematically finding geometrical forms in images. There are a number of new data structures—like \"SortedKeyStore\" and \"CuckooFilter\".
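For instance, the distinction the two new number predicates draw:

```wolfram
RealValuedNumberQ[3/4]     (* True: an explicit real-valued number *)
RealValuedNumberQ[Pi]      (* False: numeric, but not an explicit number *)
RealValuedNumericQ[Pi]     (* True: numeric with a real value *)
RealValuedNumberQ[2 + I]   (* False *)
```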

\n

There are also functions whose algorithms—and output—have been improved. ImageSaliencyFilter now uses new machine-learning-based methods. RSolveValue gives cleaner and smaller results for the important case of linear difference equations with constant coefficients.

\n

\n

\n\n

\n", - "category": "Mathematica", - "link": "https://writings.stephenwolfram.com/2023/06/llm-tech-and-a-lot-more-version-13-3-of-wolfram-language-and-mathematica/", - "creator": "Stephen Wolfram", - "pubDate": "Wed, 28 Jun 2023 18:02:59 +0000", - "enclosure": "", - "enclosureType": "", - "image": "", - "id": "", - "language": "en", - "folder": "", - "feed": "wolfram", - "read": false, - "favorite": false, - "created": false, - "tags": [], - "hash": "01c17fc689829b984c851232821521e3", - "highlights": [] - }, { "title": "Introducing Chat Notebooks: Integrating LLMs into the Notebook Paradigm", "description": "\"\"This is part of an ongoing series about our LLM-related technology:ChatGPT Gets Its “Wolfram Superpowers”!Instant Plugins for ChatGPT: Introducing the Wolfram ChatGPT Plugin KitThe New World of LLM Functions: Integrating LLM Technology into the Wolfram LanguagePrompts for Work & Play: Launching the Wolfram Prompt RepositoryIntroducing Chat Notebooks: Integrating LLMs into the Notebook Paradigm A New […]", @@ -285,28 +439,6 @@ "hash": "a976f2a02784da5470b3d8edb1a5ebaa", "highlights": [] }, - { - "title": "Prompts for Work & Play: Launching the Wolfram Prompt Repository", - "description": "\"\"This is part of an ongoing series about our LLM-related technology:ChatGPT Gets Its “Wolfram Superpowers”!Instant Plugins for ChatGPT: Introducing the Wolfram ChatGPT Plugin KitThe New World of LLM Functions: Integrating LLM Technology into the Wolfram LanguagePrompts for Work & Play: Launching the Wolfram Prompt RepositoryIntroducing Chat Notebooks: Integrating LLMs into the Notebook Paradigm Building Blocks […]", - "content": "\"\"\n

This is part of an ongoing series about our LLM-related technology: “ChatGPT Gets Its ‘Wolfram Superpowers’!”; “Instant Plugins for ChatGPT: Introducing the Wolfram ChatGPT Plugin Kit”; “The New World of LLM Functions: Integrating LLM Technology into the Wolfram Language”; “Prompts for Work & Play: Launching the Wolfram Prompt Repository”; “Introducing Chat Notebooks: Integrating LLMs into the Notebook Paradigm”.

\n

\"Prompts

\n

Building Blocks of “LLM Programming”

\n

Prompts are how one channels an LLM to do something. LLMs in a sense always have lots of “latent capability” (e.g. from their training on billions of webpages). But prompts—in a way that’s still scientifically mysterious—are what let one “engineer” what part of that capability to bring out.

\n
\n

The functionality described here will be built into the upcoming version of Wolfram Language (Version 13.3). To install it in the now-current version (Version 13.2), use

\n
PacletInstall[\"Wolfram/Chatbook\"]
\n

and

\n
PacletInstall[\"Wolfram/LLMFunctions\"].
\n

You will also need an API key for the OpenAI LLM or another LLM.

\n
\n

There are many different ways to use prompts. One can use them, for example, to tell an LLM to “adopt a particular persona”. One can use them to effectively get the LLM to “apply a certain function” to its input. And one can use them to get the LLM to frame its output in a particular way, or to call out to tools in a certain way.

\n

And much as functions are the building blocks for computational programming—say in the Wolfram Language—so prompts are the building blocks for “LLM programming”. And—much like functions—there are prompts that correspond to “lumps of functionality” that one can expect will be repeatedly used.

\n

Today we’re launching the Wolfram Prompt Repository to provide a curated collection of useful community-contributed prompts—set up to be seamlessly accessible both interactively in Chat Notebooks and programmatically in things like LLMFunction:

\n

Wolfram Prompt Repository home page

\n

As a first example, let’s talk about the "Yoda" prompt, which is listed as a “persona prompt”. Here’s its page:

\n

Wolfram Prompt Repository Yoda persona

\n

So how do we use this prompt? If we’re using a Chat Notebook (say obtained from File > New > Chat-Driven Notebook) then just typing @Yoda will “invoke” the Yoda persona:

\n

Should I eat a piece of chocolate now?

\n

At a programmatic level, one can “invoke the persona” through LLMPrompt (the result is different because there’s by default randomness involved):

\n

[embedded code cell]
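Programmatically, the same persona invocation can be sketched roughly as follows, assuming the Wolfram/LLMFunctions paclet is installed and an LLM API key is configured (the output will differ from run to run):

```wolfram
(* Sketch: prepend the repository persona prompt to a query *)
LLMSynthesize[{LLMPrompt["Yoda"], "Should I eat a piece of chocolate now?"}]
```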

There are several initial categories of prompts in the Prompt Repository:

\n

[list of prompt categories: personas, function prompts, modifier prompts]

There’s a certain amount of crossover between these categories (and there’ll be more categories in the future—particularly related to generating computable results, and calling computational tools). But there are different ways to use prompts in different categories.

\n

Function prompts are all about taking existing text, and transforming it in some way. We can do this programmatically using LLMResourceFunction:

\n

[embedded code cell]

\n
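A sketch of this programmatic use, assuming the ActiveVoiceRephrase function prompt mentioned below and a configured LLM:

```wolfram
(* Retrieve the function prompt from the repository and apply it to text *)
LLMResourceFunction["ActiveVoiceRephrase"]["The AI was switched off by him."]
```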

We can also do it in a Chat Notebook using !ActiveVoiceRephrase, with the shorthand ^ to refer to text in the cell above, and > to refer to text in the current chat cell:

\n

The AI was switched off by him.

\n

Modifier prompts have to do with specifying how to modify output coming from the LLM. In this case, the LLM typically produces a whole mini-essay:

\n

[embedded code cell]

But with the YesNo modifier prompt, it simply says “Yes”:

\n

[embedded code cell]

\n
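In rough outline, the programmatic version combines the modifier prompt with the question (a sketch, assuming the YesNo prompt from the repository):

```wolfram
(* Sketch: include the modifier prompt so the answer is forced to yes/no *)
LLMSynthesize[{LLMPrompt["YesNo"], "Is a watermelon bigger than a human head?"}]
```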

In a Chat Notebook, you can introduce a modifier prompt using #:

\n

Is a watermelon bigger than a human head?

\n

Quite often you’ll want several modifier prompts:

\n

Is a watermelon bigger than a human head?

\n

What Does Having a Prompt Repository Do for One?

\n

LLMs are powerful things. And one might wonder why, if one has a description for a prompt, one can’t just use that description directly, rather than having to store a prewritten prompt. Well, sometimes just using the description will indeed work fine. But often it won’t. Sometimes that’s because one needs to clarify further what one wants. Sometimes it’s because there are not-immediately-obvious corner cases to cover. And sometimes there’s just a certain amount of “LLM wrangling” to be done. And this all adds up to the need to do at least some “prompt engineering” on almost any prompt.

\n

The YesNo modifier prompt from above is currently fairly simple:

\n

[embedded code cell]

\n

But it’s already complicated enough that one doesn’t want to have to repeat it every time one’s trying to force a yes/no answer. And no doubt there’ll be subsequent versions of this prompt (that, yes, will have versioning handled seamlessly by the Prompt Repository) that will get increasingly elaborate, as more cases show up, and more prompt engineering gets done to address them.

\n

Many of the prompts in the Prompt Repository even now are considerably more complicated. Some contain typical “general prompt engineering”, but others contain for example special information that the LLM doesn’t intrinsically know, or detailed examples that home in on what one wants to have happen.

\n

In the simplest cases, prompts (like the YesNo one above) are just plain pieces of text. But often they contain parameters, or have additional computational or other content. And a key feature of the Wolfram Prompt Repository is that it can handle this ancillary material, ultimately by representing everything using Wolfram Language symbolic expressions.

\n

As we discussed in connection with LLMFunction, etc. in another post, the core “textual” part of a prompt is represented by a symbolic StringTemplate that immediately allows positional or named parameters. Then there can be an interpreter that applies a Wolfram Language Interpreter function to the raw textual output of the LLM—transforming it from plain text to a computable symbolic expression. More sophisticatedly, there can also be specifications of tools that the LLM can call (represented symbolically as LLMTool constructs), as well as other information about the required LLM configuration (represented by an LLMConfiguration object). But the key point is that all of this is automatically “packaged up” in the Prompt Repository.

\n
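As a hedged sketch of how these pieces fit together: a prompt is a template, and an interpreter converts the LLM’s raw text into a computable expression. The template text here is illustrative, not an actual repository entry:

```wolfram
(* The `` is a positional template slot; Interpreter["Integer"] turns
   the LLM's textual answer into an integer expression *)
f = LLMFunction["How many legs does a `` have? Answer with just a number.",
  Interpreter["Integer"]];
f["spider"]
```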

But what actually is the Wolfram Prompt Repository? Well, ultimately it’s just part of the general Wolfram Resource System—the same one that’s used for the Wolfram Function Repository, Wolfram Data Repository, Wolfram Neural Net Repository, Wolfram Notebook Archive, and many other things.

\n

And so, for example, the "Yoda" prompt is in the end represented by a symbolic ResourceObject that’s part of the Resource System:

\n

[embedded code cell]

\n

Open up the display of this resource object, and we’ll immediately see various pieces of metadata (and a link to documentation), as well as the ultimate canonical UUID of the object:

\n

[embedded code cell]

\n

Everything that needs to use the prompt—Chat Notebooks, LLMPrompt, LLMResourceFunction, etc.—just works by accessing appropriate parts of the ResourceObject, so that for example the “hero image” (used for the persona icon) is retrieved like this:

\n

[embedded code cell]

\n

There’s a lot of important infrastructure that “comes for free” from the general Wolfram Resource System—like efficient caching, automatic updating, documentation access, etc. And things like LLMPrompt follow the exact same approach as things like NetModel in being able to immediately reference entries in a repository.

\n

What’s in the Prompt Repository So Far

\n

We haven’t been working on the Wolfram Prompt Repository for very long, and we’re just opening it up for outside contributions now. But already the Repository contains (as of today) about two hundred prompts. So what are they so far? Well, it’s a range. From “just for fun”, to very practical, useful and sometimes quite technical.

\n

In the “just for fun” category, there are all sorts of personas, including:

\n

In a sentence or two, what are you good for?

\n


\n

There are also slightly more “practical” personas—like SupportiveFriend and SportsCoach too—which can be more helpful sometimes than others:

\n

I'm a bit tired of writing all these posts.

\n

Then there are “functional” ones like NutritionistBot, etc.—though most of these are still very much under development, and will advance considerably when they are hooked up to tools, so they’re able to access accurate computable knowledge, external data, etc.

\n

But the largest category of prompts so far in the Prompt Repository are function prompts: prompts which take text you supply, and do operations on it. Some are based on straightforward (at least for an LLM) text transformations:

\n

There are many prompts available.

\n

AIs are cool.

\n

!ShorterRephrase

\n

I hope you can come to my party.

\n

There are all sorts of text transformations that can be useful:

\n

Stephen Wolfram lives in Concord, MA

\n

A curated collection of prompts, personas, functions, & more for LLMs

\n

Some function prompts—like Summarize, TLDR, NarrativeToResume, etc.—can be very useful in making text easier to assimilate. And the same is true of things like LegalDejargonize, MedicalDejargonize, ScientificDejargonize, BizDejargonize—or, depending on your background, the *Jargonize versions of these:

\n

The rat ignored the maze and decided to eat the cheese

\n

Some text transformation prompts seem to perhaps make use of a little more “cultural awareness” on the part of the LLM:

\n

WOLFRAM PROMPT REPOSITORY (UNDER CONSTRUCTION)

\n


\n

AIs provide excellent programming advice.

\n

An app to let cats interact with chatbots

\n

A dinosaur that can roll itself up in a ball

\n

Some function prompts are for analyzing text (or, for example, for doing educational assessments):

\n

I woz going to them place when I want stop

\n

I believe plants should be the only organisms on the planet

\n

Sometimes prompts are most useful when they’re applied programmatically. Here are two synthesized sentences:

\n

[embedded code cell]

\n

Now we can use the DocumentCompare prompt to compare them (something that might, for example, be useful in regression testing):

\n

[embedded code cell]

\n
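In outline, the programmatic comparison might look like this (a sketch; the two texts are made up here, and it is assumed DocumentCompare takes the documents as arguments):

```wolfram
(* Hypothetical texts; the prompt returns an LLM-written comparison *)
textA = "The cat sat on the mat.";
textB = "The cat sat on the red mat.";
LLMResourceFunction["DocumentCompare"][textA, textB]
```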

There are other kinds of “text analysis” prompts, like GlossaryGenerate, CharacterList (characters mentioned in a piece of fiction) and LOCTopicSuggest (Library of Congress book topics):

\n

What is ChatGPT Doing and Why Does It Work?

\n

There are lots of other function prompts already in the Prompt Repository. Some—like FilenameSuggest and CodeImport—are aimed at doing computational tasks. Others make use of common-sense knowledge. And some are just fun. But, yes, writing good prompts is hard—and what’s in the Prompt Repository will gradually improve. And when there are bugs, they can be pretty weird. PunAbout, for example, is supposed to generate a pun about some topic, but here it decides to protest and insist on generating three:

\n

Parrot

\n

The final category of prompts currently in the Prompt Repository are modifier prompts, intended as a way to modify the output generated by the LLM. Sometimes modifier prompts can be essentially textual:

\n

How many legs does a spider have?

\n


\n

But often modifier prompts are intended to create output in a particular form, suitable, for example, for interpretation by an interpreter in LLMFunction, etc.:

\n

How many legs does a spider have?

\n

Number of legs for the 5 common invertebrates

\n

Are AIs good?

\n

So far the modifier prompts in the Prompt Repository are fairly simple. But once there are prompts that make use of tools (i.e. call back into Wolfram Language during the generation process) we can expect modifier prompts that are much more sophisticated, useful and robust.

\n

Adding Your Own Prompts

\n

The Wolfram Prompt Repository is set up to be a curated public collection of prompts where it’s easy for anyone to submit a new prompt. But—as we’ll explain—you can also use the framework of the Prompt Repository to store “private” prompts, or share them with specific groups.

\n

So how do you define a new prompt in the Prompt Repository framework? The easiest way is to fill out a Prompt Resource Definition Notebook:

\n

Prompt Resource Definition Notebook

\n

You can get this notebook here, or from the Submit a Prompt button at the top of the Prompt Repository website, or by evaluating CreateNotebook[\"PromptResource\"].

\n

The setup is directly analogous to the ones for the Wolfram Function Repository, Wolfram Data Repository, Wolfram Neural Net Repository, etc. And once you’ve filled out the Definition Notebook, you’ve got various choices:

\n

Definition Notebook deployment options

\n

Submit to Repository sends the prompt to our curation team for our official Wolfram Prompt Repository; Deploy deploys it for your own use, and for people (or AIs) you choose to share it with. If you’re using the prompt “privately”, you can refer to it using its URI or other identifier (if you use ResourceRegister you can also just refer to it by the name you give it).

\n

OK, so what do you need to specify in the Definition Notebook? The most important part is the actual prompt itself. And quite often the prompt may just be a (carefully crafted) piece of plain text. But ultimately—as discussed elsewhere—a prompt is a symbolic template, that can include parameters. And you can insert parameters into a prompt using “template slots”:

\n

Template slots

\n

(Template Expression lets you insert Wolfram Language code that will be evaluated when the prompt is applied—so you can for example include the current time with Now.)

\n
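Template slots behave like ordinary StringTemplate parameters, so the “pure prompt” part can be sketched as follows (the template text is illustrative):

```wolfram
(* Positional slots `` are filled when the template is applied *)
t = StringTemplate["Summarize the following text in the style of ``: ``"];
t["a haiku", "some input text"]
(* -> "Summarize the following text in the style of a haiku: some input text" *)
```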

In simple cases, all you’ll need to specify is the “pure prompt”. But in more sophisticated cases you’ll also want to specify some “outside the prompt” information—and there are some sections for this in the Definition Notebook:

\n

Definition Notebook sections

\n

Chat-Related Features is most relevant for personas:

\n

Chat features

\n

You can give an icon that will appear in Chat Notebooks for that persona. And then you can give Wolfram Language functions which are to be applied to the contents of each chat cell before it is fed to the LLM (“Cell Processing Function”), and to the output generated by the LLM (“Cell Post Evaluation Function”). These functions are useful in transforming material to and from the plain text consumed by the LLM, and supporting richer display and computational structures.

\n

Programmatic Features is particularly relevant for function prompts, and for the way prompts are used in LLMResourceFunction etc.:

\n

Programmatic Features

\n

There’s “function-oriented documentation” (analogous to what’s used for built-in Wolfram Language functions, or for functions in the Wolfram Function Repository). And then there’s the Output Interpreter: a function to be applied to the textual output of the LLM, to generate the actual expression that will be returned by LLMResourceFunction, or for formatting in a Chat Notebook.

\n

What about the LLM Configuration section?

\n

LLM configuration options

\n

The first thing it does is to define tools that can be requested by the LLM when this prompt is used. We’ll discuss tools in another post. But as we’ve mentioned several times, they’re a way of having the LLM call Wolfram Language to get particular computational results that are then returned to the LLM. The other part of the LLM Configuration section is a more general LLMConfiguration specification, which can include “temperature” settings, the requirement of using a particular underlying model (e.g. GPT-4), etc.

\n

What else is in the Definition Notebook? There are two main documentation sections: one for Chat Examples, and one for Programmatic Examples. Then there are various kinds of metadata.

\n

Of course, at the very top of the Definition Notebook there’s another very important thing: the name you specify for the prompt. And here—with the initial prompts we’ve put into the Prompt Repository—we’ve started to develop some conventions. Following typical Wolfram Language usage we’re “camel-casing” names (so it’s "TitleSuggest" not "title suggest"). Then we try to use different grammatical forms for different kinds of prompts. For personas we try to use noun phrases (like "Cheerleader" or "SommelierBot"). For functions we usually try to use verb phrases (like "Summarize" or "HypeUp"). And for modifiers we try to use past-tense verb forms (like "Translated" or "HaikuStyled").

\n

The overall goal with prompt names—like with ordinary Wolfram Language function names—is to provide a summary of what the prompt does, in a form that’s short enough that it appears a bit like a word in computational language input, chats, etc.

\n

OK, so let’s say you’ve filled out a Definition Notebook, and you Deploy it. You’ll get a webpage that includes the documentation you’ve given—and looks pretty much like any of the pages in the Wolfram Prompt Repository. And now if you want to use the prompt, you can just click the appropriate place on the webpage, and you’ll get a copyable version that you can immediately paste into an input cell, a chat cell, etc. (Within a Chat Notebook there’s an even more direct mechanism: in the chat icon menu, go to Add & Manage Personas, and when you browse the Prompt Repository, there’ll be an Install button that will automatically install a persona.)

\n

A Language of Prompts

\n

LLMs fundamentally deal with natural language of the kind we humans normally use. But when we set up a named prompt we’re in a sense defining a “higher-level word” that can be used to “communicate” with the LLM—at the least with the kind of “harness” that LLMFunction, Chat Notebooks, etc. provide. And we can then imagine in effect “talking in prompts” and for example building up more and more levels of prompts.

\n

Of course, we already have a major example of something that at least in outline is similar: the way in which over the past few decades we’ve been able to progressively construct a whole tower of functionality from the built-in functions in the Wolfram Language. There’s an important difference, however: in defining built-in functions we’re always working on “solid ground”, with precise (carefully designed) computational specifications for what we’re doing. In setting up prompts for an LLM, try as we might to “write the prompts well” we’re in a sense ultimately “at the mercy of the LLM” and how it chooses to handle things.

\n

It feels in some ways like the difference between dealing with engineering systems and with human organizations. In both cases one can set up plans and procedures for what should happen. In the engineering case, however, one can expect that (at least at the level of individual operations) the system will do exactly as one says. In the human case—well, all kinds of things can happen. That is not to say that amazing results can’t be achieved by human organizations; history clearly shows they can.

\n

But—as someone who’s managed (human) organizations now for more than four decades—I think I can say the “rhythm” and practices of dealing with human organizations differ in significant ways from those for technological ones. There’s still a definite pattern of what to do, but it’s different, with a different way of going back and forth to get results, different approaches to “debugging”, etc.

\n

How will it work with prompts? It’s something we still need to get used to. But for me there’s immediately another useful “comparable”. Back in the early 2000s we’d had a decade or two of experience in developing what’s now Wolfram Language, with its precise formal specifications, carefully designed with consistency in mind. But then we started working on Wolfram|Alpha—where now we wanted a system that would just deal with whatever input someone might provide. At first it was jarring. How could we develop any kind of manageable system based on boatloads of potentially incompatible heuristics? It took a little while, but eventually we realized that when everything is a heuristic there’s a certain pattern and structure to that. And over time the development we do has become progressively more systematic.

\n

And so, I expect, it will be with prompts. In the Wolfram Prompt Repository today, we have a collection of prompts that cover a variety of areas, but are almost all “first level”, in the sense that they depend only on the base LLM, and not on other prompts. But over time I expect there’ll be whole hierarchies of prompts that develop (including metaprompts for building prompts, etc.). And indeed I won’t be surprised if in this way all sorts of “repeatable lumps of functionality” are found, that actually can be implemented in a direct computational way, without depending on LLMs. (And, yes, this may well go through the kind of “semantic grammar” structure that I’ve discussed elsewhere.)

\n

But as of now, we’re still just at the point of first launching the Wolfram Prompt Repository, and beginning the process of understanding the range of things—both useful and fun—that can be achieved with prompts. But it’s already clear that there’s going to be a very interesting world of prompts—and a progressive development of “prompt language” that in some ways will probably parallel (though at a considerably faster rate) the historical development of ordinary human languages.

\n

It’s going to be a community effort—just as it is with ordinary human languages—to explore and build out “prompt language”. And now that it’s launched, I’m excited to see how people will use our Prompt Repository, and just what remarkable things end up being possible through it.

\n", - "category": "Artificial Intelligence", - "link": "https://writings.stephenwolfram.com/2023/06/prompts-for-work-play-launching-the-wolfram-prompt-repository/", - "creator": "Stephen Wolfram", - "pubDate": "Thu, 08 Jun 2023 01:54:36 +0000", - "enclosure": "", - "enclosureType": "", - "image": "", - "id": "", - "language": "en", - "folder": "", - "feed": "wolfram", - "read": false, - "favorite": false, - "created": false, - "tags": [], - "hash": "97c48c180c49516491bb916a04be7cd1", - "highlights": [] - }, { "title": "The New World of LLM Functions: Integrating LLM Technology into the Wolfram Language", "description": "\"\"This is part of an ongoing series about our LLM-related technology:ChatGPT Gets Its “Wolfram Superpowers”!Instant Plugins for ChatGPT: Introducing the Wolfram ChatGPT Plugin KitThe New World of LLM Functions: Integrating LLM Technology into the Wolfram LanguagePrompts for Work & Play: Launching the Wolfram Prompt RepositoryIntroducing Chat Notebooks: Integrating LLMs into the Notebook Paradigm Turning LLM […]", @@ -606,6 +738,94 @@ "image": null, "description": "xkcd.com: A webcomic of romance and math humor.", "items": [ + { + "title": "Interoperability", + "description": "\"We're", + "content": "\"We're", + "category": "", + "link": "https://xkcd.com/3105/", + "creator": "", + "pubDate": "Fri, 20 Jun 2025 04:00:00 -0000", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "en", + "folder": "", + "feed": "xkcd", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "46de4f2ae9dcf650a00f963671b99dd6", + "highlights": [] + }, + { + "title": "Tukey", + "description": "\"Numbers", + "content": "\"Numbers", + "category": "", + "link": "https://xkcd.com/3104/", + "creator": "", + "pubDate": "Wed, 18 Jun 2025 04:00:00 -0000", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "en", + "folder": "", + "feed": "xkcd", + "read": false, + "favorite": false, + 
"created": false, + "tags": [], + "hash": "63477b5c770ebc121a54b4b2adcea61f", + "highlights": [] + }, + { + "title": "Exoplanet System", + "description": "\"Sure,", + "content": "\"Sure,", + "category": "", + "link": "https://xkcd.com/3103/", + "creator": "", + "pubDate": "Mon, 16 Jun 2025 04:00:00 -0000", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "en", + "folder": "", + "feed": "xkcd", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "3239343b4a81b3ee5ca6949de380862e", + "highlights": [] + }, + { + "title": "Reading a Big Number", + "description": "\"[desperately]", + "content": "\"[desperately]", + "category": "", + "link": "https://xkcd.com/3102/", + "creator": "", + "pubDate": "Fri, 13 Jun 2025 04:00:00 -0000", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "en", + "folder": "", + "feed": "xkcd", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "b72d8c308901369bb8ec09779f9b183a", + "highlights": [] + }, { "title": "Research Account", "description": "\"Focus", @@ -738,28 +958,6 @@ "hash": "ba6481e8455e0a70f93c50990f71d6a2", "highlights": [] }, - { - "title": "OSTE – Le scanner de vulns qui combine Nikto, ZAP, Nuclei, SkipFish, et Wapiti", - "description": "OSTE est un scanner de sécurité open-source qui simplifie les tests dynamiques des applications, combinant plusieurs scanners DAST tels que Nikto Scanner, OWASP ZAP, Nuclei, SkipFish et Wapiti. Il se concentre sur les vulnérabilités d'injection Web et offre une interface conviviale. OSTE fonctionne sur plusieurs plateformes, principalement Kali Linux.", - "content": "

\"\"

\n

If you take even a passing interest in IT security, let me introduce you today to OSTE, which is what you might call a metascanner.

\n\n\n\n
\r\n

So what is a metascanner?

\n\n\n\n

Well, it is a web vulnerability scanner that combines several tools, such as Nikto, OWASP ZAP, Nuclei, SkipFish and Wapiti.

\n\n\n\n

The appeal of this tool is its very user-friendly graphical interface, which lets you launch scans as well as browse the resulting reports. SQL, XSS, XML and HTML injections, plus command injections specific to the targeted operating system. Each of the integrated DAST (Dynamic Application Security Testing) scanners produces lists of vulnerabilities to help you identify and fix potential issues.

\n\n\n
\n
\"\"
\n\n\n

To install it, you will need all the tools I have just listed, but if you use Kali Linux you are covered, since they all ship with the distribution. Otherwise you will have to install them manually.

\n\n\n\n
\r\n

Then all that is left is to clone the repository onto your machine and run the command

\n\n\n\n
python3 metascan.py
\n\n\n\n

You can then launch scans, load the results, export them and review them directly from the graphical interface.

\n\n\n\n

As you will have gathered, OSTE is a great tool for simplifying security assessments. Keep in mind, though, that it is intended for educational use or for audit engagements you have actually been commissioned to carry out.

\n\n\n\n

If you want to learn more, click here.

\n", - "category": "Sécurité", - "link": "https://korben.info/scanner-oste-tests-securite-dynamiques-applications-web.html", - "creator": "Korben", - "pubDate": "Sun, 28 Jan 2024 08:00:00 +0000", - "enclosure": "", - "enclosureType": "", - "image": "", - "id": "", - "language": "fr", - "folder": "", - "feed": "korben.info", - "read": false, - "favorite": false, - "created": false, - "tags": [], - "hash": "b1c8d0b3098030bb9417cfa55fdeab34", - "highlights": [] - }, { "title": "Fast Radio Bursts", "description": "\"Dr.", @@ -3875,6 +4073,446 @@ "image": "\n\t", "description": "Upgrade your mind", "items": [ + { + "title": "NotepadNext - Du Notepad++ enfin cross-platform ?", + "description": "

OK, I have a confession to make. Even though I love Notepad++, I have not used it since I switched to a Mac. And I miss it so much that I almost installed a Windows VM just for it.

\n

Fortunately, I have just discovered NotepadNext, a version of Notepad++ that runs everywhere, even on Mac and Linux.

\n

The developer dail8859 had the brilliant idea of completely reimplementing Notepad++ with the Qt framework so that it runs on every platform.

", + "content": "

OK, I have a confession to make. Even though I love Notepad++, I have not used it since I switched to a Mac. And I miss it so much that I almost installed a Windows VM just for it.

\n

Fortunately, I have just discovered NotepadNext, a version of Notepad++ that runs everywhere, even on Mac and Linux.

\n

The developer dail8859 had the brilliant idea of completely reimplementing Notepad++ with the Qt framework so that it runs on every platform.

", + "category": "outils-services", + "link": "https://korben.info/notepadnext-notepad-cross-platform.html", + "creator": "Korben", + "pubDate": "Sun, 22 Jun 2025 09:22:48 +0200", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "fr", + "folder": "", + "feed": "korben.info", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "76a619e259776fcb0aad1d0e648c5ebb", + "highlights": [] + }, + { + "title": "Cloudflare bloque une attaque DDoS record de 7,3 Tb/s", + "description": "

Do you think about the future? I do, all the time! And I tell myself we are in for plenty more surprises… Maybe in 2030 our toaster will be taking part in a 50 Tb/s attack on Netflix, or our connected fridge will be DDoSing NASA while we are out buying milk. Nobody knows, but one thing is certain: while humanity is busy discovering that the AIs really have been secretly fighting each other for years, petabytes at a time, we will still be struggling with 4G on the metro.

", + "content": "

Do you think about the future? I do, all the time! And I tell myself we are in for plenty more surprises… Maybe in 2030 our toaster will be taking part in a 50 Tb/s attack on Netflix, or our connected fridge will be DDoSing NASA while we are out buying milk. Nobody knows, but one thing is certain: while humanity is busy discovering that the AIs really have been secretly fighting each other for years, petabytes at a time, we will still be struggling with 4G on the metro.

", + "category": "securite-vie-privee", + "link": "https://korben.info/cloudflare-bloque-attaque-ddos-record-7-3-tbps.html", + "creator": "Korben", + "pubDate": "Sun, 22 Jun 2025 07:01:05 +0200", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "fr", + "folder": "", + "feed": "korben.info", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "47402a438386a2c67ddad7c85af9e13f", + "highlights": [] + }, + { + "title": "2025 : La France découvre la censure… et le VPN devient mainstream", + "description": "

– Article in partnership with Surfshark

\n

It’s official: since June 4, China France has decided to play the strict parent of the web. One morning you just want to “do your research” on Pornhub or YouPorn, and bam: “This site is not accessible from your country.” Behind the block lie endless debates about protecting minors, morality and politics; in short, everything that leaves us having to get creative to reach what we want. The result? VPN use has already exploded in France, and Surfshark has established itself as one of the favorite lifelines of frustrated internet users.

", + "content": "

– Article in partnership with Surfshark

\n

It’s official: since June 4, China France has decided to play the strict parent of the web. One morning you just want to “do your research” on Pornhub or YouPorn, and bam: “This site is not accessible from your country.” Behind the block lie endless debates about protecting minors, morality and politics; in short, everything that leaves us having to get creative to reach what we want. The result? VPN use has already exploded in France, and Surfshark has established itself as one of the favorite lifelines of frustrated internet users.

", + "category": "securite-vie-privee", + "link": "https://korben.info/la-france-decouvre-la-censure.html", + "creator": "Korben", + "pubDate": "Thu, 19 Jun 2025 16:25:17 +0200", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "fr", + "folder": "", + "feed": "korben.info", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "c74638ab00016f2161e7f7c88b11d9e3", + "highlights": [] + }, + { + "title": "Scrappy - La magie du développement fait maison", + "description": "

65 billion dollars: that’s the estimated size of the low-code market in 2025. Incredible!! Who would have guessed it would come roaring back, when in the 90s our uncles and grandmothers were building homemade apps with HyperCard in under 15 minutes?

\n

And today, building the same thing takes 3 frameworks, 2 databases and an engineering degree. Fortunately, 2 developers have decided to bring back the magic of that era with Scrappy.

", + "content": "

65 billion dollars: that’s the estimated size of the low-code market in 2025. Incredible!! Who would have guessed it would come roaring back, when in the 90s our uncles and grandmothers were building homemade apps with HyperCard in under 15 minutes?

\n

And today, building the same thing takes 3 frameworks, 2 databases and an engineering degree. Fortunately, 2 developers have decided to bring back the magic of that era with Scrappy.

", + "category": "outils-services", + "link": "https://korben.info/scrappy-apps-maison-developpement-accessible.html", + "creator": "Korben", + "pubDate": "Thu, 19 Jun 2025 12:11:50 +0200", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "fr", + "folder": "", + "feed": "korben.info", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "da45f40689366727f28df10414b78ce9", + "highlights": [] + }, + { + "title": "Faille Linux critique - Vérifiez et patchez d'urgence", + "description": "

One question I keep asking myself about Linux is: when will we stop discovering flaws that turn any ordinary user into absolute super master of the system?? Well, the answer is apparently “Not today!!”, because Qualys has just released 2 CVEs that leave all the major distros exposed.

\n

I’ll admit that when I read the Qualys TRU report, I first thought it was a joke. 2 lines of code, 3 seconds of execution, and presto, you’re root on Ubuntu, Debian, Fedora and openSUSE. CVE-2025-6018 and CVE-2025-6019, which have just been discovered, aren’t just 2 more flaws on the long list of Linux vulnerabilities. They’re a devastating combination that exploits services so basic they run on practically every default Linux installation. They affect UDisks (the storage management daemon) and PAM (the authentication modules), in other words components found everywhere.

", + "content": "

One question I keep asking myself about Linux is: when will we stop discovering flaws that turn any ordinary user into absolute super master of the system?? Well, the answer is apparently “Not today!!”, because Qualys has just released 2 CVEs that leave all the major distros exposed.

\n

I’ll admit that when I read the Qualys TRU report, I first thought it was a joke. 2 lines of code, 3 seconds of execution, and presto, you’re root on Ubuntu, Debian, Fedora and openSUSE. CVE-2025-6018 and CVE-2025-6019, which have just been discovered, aren’t just 2 more flaws on the long list of Linux vulnerabilities. They’re a devastating combination that exploits services so basic they run on practically every default Linux installation. They affect UDisks (the storage management daemon) and PAM (the authentication modules), in other words components found everywhere.

", + "category": "securite-vie-privee", + "link": "https://korben.info/faille-critique-linux-udisks-acces-root.html", + "creator": "Korben", + "pubDate": "Thu, 19 Jun 2025 11:42:35 +0200", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "fr", + "folder": "", + "feed": "korben.info", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "d3313eb106d83d440d5bde5599c06e78", + "highlights": [] + }, + { + "title": "Bye bye les pubs et merci Patreon !", + "description": "

In 2005, when I set up Google AdSense, it was magical. Without doing anything, I was earning a few euros a month with my site, and for the first time I told myself that maybe one day I could make a living from it. Those ad banners opened up a lot of possibilities for a lot of people, because you could finally live off your website (or round out the month’s income) without it being complicated.

", + "content": "

In 2005, when I set up Google AdSense, it was magical. Without doing anything, I was earning a few euros a month with my site, and for the first time I told myself that maybe one day I could make a living from it. Those ad banners opened up a lot of possibilities for a lot of people, because you could finally live off your website (or round out the month’s income) without it being complicated.

", + "category": "internet-reseaux", + "link": "https://korben.info/pub-programmatique-suppression-patreon.html", + "creator": "Korben", + "pubDate": "Thu, 19 Jun 2025 08:34:20 +0200", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "fr", + "folder": "", + "feed": "korben.info", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "5eae4fb4ad1d17e4057481bcd2d02729", + "highlights": [] + }, + { + "title": "Midjourney Video Model V1 - L'IA qui fait bouger vos images", + "description": "

The existential question we all ask about generative AI is: when will we finally be able to create entire films from our couch?

\n

Good timing, because Midjourney has just taken a giant step in that direction with the launch of its V1 video model, which I’ve been able to test every which way. Small spoiler: it’s impressive, but not yet ready for the Oscars.

\n

Midjourney has always been the reference in AI image generation, but the team had a much broader vision from the start. Their ultimate goal? To build systems capable of simulating 3D worlds in real time, where you can move around, interact and change the environment on the fly. To get there, they are methodically assembling their building blocks: first images (done), then making them move (also done), then 3D, and finally real time.

", + "content": "

The existential question we all ask about generative AI is: when will we finally be able to create entire films from our couch?

\n

Good timing, because Midjourney has just taken a giant step in that direction with the launch of its V1 video model, which I’ve been able to test every which way. Small spoiler: it’s impressive, but not yet ready for the Oscars.

\n

Midjourney has always been the reference in AI image generation, but the team had a much broader vision from the start. Their ultimate goal? To build systems capable of simulating 3D worlds in real time, where you can move around, interact and change the environment on the fly. To get there, they are methodically assembling their building blocks: first images (done), then making them move (also done), then 3D, and finally real time.

", + "category": "developpement", + "link": "https://korben.info/midjourney-video-model-v1-ia-images-animees.html", + "creator": "Korben", + "pubDate": "Thu, 19 Jun 2025 07:33:24 +0200", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "fr", + "folder": "", + "feed": "korben.info", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "8bdd83826dd5774bbb36d726cf980594", + "highlights": [] + }, + { + "title": "Stablecoins régulés - Le Sénat US vote le GENIUS Act", + "description": "

Where central banks once took decades to agree on currencies, the Americans needed only 6 months to agree on stablecoin regulation. The GENIUS Act has just been voted through, and I think it’s going to shake things up harder than a bear market.

\n

68 votes to 30: that’s the score of the historic June 17, 2025 vote in the US Senate, and for the first time in United States history, stablecoins now have an official federal legal framework. That’s still more than 150 billion dollars’ worth of tokens moving from “regulatory gray area” status to “legal and regulated”.

", + "content": "

Where central banks once took decades to agree on currencies, the Americans needed only 6 months to agree on stablecoin regulation. The GENIUS Act has just been voted through, and I think it’s going to shake things up harder than a bear market.

\n

68 votes to 30: that’s the score of the historic June 17, 2025 vote in the US Senate, and for the first time in United States history, stablecoins now have an official federal legal framework. That’s still more than 150 billion dollars’ worth of tokens moving from “regulatory gray area” status to “legal and regulated”.

", + "category": "actualites-tech", + "link": "https://korben.info/stablecoins-regules-senat-us-genius-act-historique.html", + "creator": "Korben", + "pubDate": "Thu, 19 Jun 2025 06:06:19 +0200", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "fr", + "folder": "", + "feed": "korben.info", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "f3eedd39db259abbf0399e31b5f864c2", + "highlights": [] + }, + { + "title": "Tracker torrent mort - 3 millions de peers zombies", + "description": "

What nobody tells you about BitTorrent is that millions of machines keep knocking on the doors of servers that have been completely dead for years. A developer just bought back an abandoned domain and found that the P2P network is far more massive than you might think.

\n

So, let’s rewind a bit. The guy in question was quietly downloading his Linux ISOs (let’s call them that, shall we ^^), and his BitTorrent client was crawling. So he opened the trackers tab in qBittorrent and, surprise: most of the servers were completely dead. Expired domains, servers down, the usual digital graveyard.

", + "content": "

What nobody tells you about BitTorrent is that millions of machines keep knocking on the doors of servers that have been completely dead for years. A developer just bought back an abandoned domain and found that the P2P network is far more massive than you might think.

\n

So, let’s rewind a bit. The guy in question was quietly downloading his Linux ISOs (let’s call them that, shall we ^^), and his BitTorrent client was crawling. So he opened the trackers tab in qBittorrent and, surprise: most of the servers were completely dead. Expired domains, servers down, the usual digital graveyard.

", + "category": "actualites-tech", + "link": "https://korben.info/tracker-torrent-mort-3-millions-pairs.html", + "creator": "Korben", + "pubDate": "Wed, 18 Jun 2025 15:31:49 +0200", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "fr", + "folder": "", + "feed": "korben.info", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "1aa4e5d58f924fe8b1e81e51bba51b7c", + "highlights": [] + }, + { + "title": "MSIX - Le remplaçant du .exe qui renforce Windows", + "description": "

How many of you have already installed a Windows program that wrecked your system? I’m raising my hand, and I’m sure I’m not the only one! Fortunately, Microsoft may have finally found the solution with MSIX, a format that replaces our good old .exe.

\n

Stop laughing, I’m serious ^^!

\n

Since I’ve been in tech, I’ve seen every installation format imaginable go by. MSI, NSIS, Inno Setup, you name it. But nothing changes… You always end up reformatting your PC because an uninstall left traces all over the registry. And it’s with experience that you learn, of course, to be wary of those programs that promise the moon but end up polluting your system.

", + "content": "

How many of you have already installed a Windows program that wrecked your system? I’m raising my hand, and I’m sure I’m not the only one! Fortunately, Microsoft may have finally found the solution with MSIX, a format that replaces our good old .exe.

\n

Stop laughing, I’m serious ^^!

\n

Since I’ve been in tech, I’ve seen every installation format imaginable go by. MSI, NSIS, Inno Setup, you name it. But nothing changes… You always end up reformatting your PC because an uninstall left traces all over the registry. And it’s with experience that you learn, of course, to be wary of those programs that promise the moon but end up polluting your system.

", + "category": "outils-services", + "link": "https://korben.info/msix-remplacant-exe-windows-revolution.html", + "creator": "Korben", + "pubDate": "Wed, 18 Jun 2025 14:27:46 +0200", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "fr", + "folder": "", + "feed": "korben.info", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "9e6b5221236c055c4d3a9aa5462a6c21", + "highlights": [] + }, + { + "title": "Dnstwist - Pour détecter les typo squatteurs de votre nom de domaine", + "description": "

Neo in The Matrix had the choice between the red pill and the blue one; for us, it’s between closing our eyes to the domains that imitate ours to steal our traffic, or opening dnstwist and seeing how deep the rabbit hole goes.

\n

After testing several anti-phishing scanners that promise the moon, I was skeptical, but dnstwist is one of the rare ones that keeps its promises. I ran a scan on korben.info to see what it would find, and there it was… 434 permutations generated, 19 domains already registered, which isn’t bad for a site that has existed since time immemorial ^^.

", + "content": "

Neo in The Matrix had the choice between the red pill and the blue one; for us, it’s between closing our eyes to the domains that imitate ours to steal our traffic, or opening dnstwist and seeing how deep the rabbit hole goes.

\n

After testing several anti-phishing scanners that promise the moon, I was skeptical, but dnstwist is one of the rare ones that keeps its promises. I ran a scan on korben.info to see what it would find, and there it was… 434 permutations generated, 19 domains already registered, which isn’t bad for a site that has existed since time immemorial ^^.

", + "category": "developpement", + "link": "https://korben.info/dnstwist-detecteur-phishing-domaines.html", + "creator": "Korben", + "pubDate": "Wed, 18 Jun 2025 11:04:41 +0200", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "fr", + "folder": "", + "feed": "korben.info", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "ae4f5b83656234bdd4a0787d953855f1", + "highlights": [] + }, + { + "title": "SHADE-Arena - Quand les IA apprennent à nous saborder en douce", + "description": "

I was quietly reading Anthropic’s latest paper with my coffee when my cat (Percy) gave me his psychopath stare, seemingly asking why I looked like someone who had just seen a ghost. The real reason is that I had just discovered that Anthropic is now testing how AIs can lie to our faces, through their SHADE-Arena project. Behind that somewhat barbaric name actually hides a secret laboratory for measuring the sabotage capabilities of our favorite virtual assistants.

", + "content": "

I was quietly reading Anthropic’s latest paper with my coffee when my cat (Percy) gave me his psychopath stare, seemingly asking why I looked like someone who had just seen a ghost. The real reason is that I had just discovered that Anthropic is now testing how AIs can lie to our faces, through their SHADE-Arena project. Behind that somewhat barbaric name actually hides a secret laboratory for measuring the sabotage capabilities of our favorite virtual assistants.

", + "category": "outils-services", + "link": "https://korben.info/shade-arena-anthropic-sabotage-ia.html", + "creator": "Korben", + "pubDate": "Wed, 18 Jun 2025 09:02:27 +0200", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "fr", + "folder": "", + "feed": "korben.info", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "a8e6255882b37b42e9914b5c7e0c1a32", + "highlights": [] + }, + { + "title": "Anthropic Cookbook - Claude devient encore plus accessible aux devs", + "description": "

Back in the day, if you wanted to do AI, you needed a PhD and 6 months to understand TensorFlow. And today? Claude is finally becoming accessible to ordinary mortals (at least to slightly dev-minded mortals ^^).

\n

So what exactly is the Anthropic Cookbook? Well, imagine a recipe book, except instead of making crêpes, you learn to turn Claude into a developer assistant. It’s an official collection of Jupyter notebooks that shows you how to put Claude to work in your projects without tearing your hair out.

", + "content": "

Back in the day, if you wanted to do AI, you needed a PhD and 6 months to understand TensorFlow. And today? Claude is finally becoming accessible to ordinary mortals (at least to slightly dev-minded mortals ^^).

\n

So what exactly is the Anthropic Cookbook? Well, imagine a recipe book, except instead of making crêpes, you learn to turn Claude into a developer assistant. It’s an official collection of Jupyter notebooks that shows you how to put Claude to work in your projects without tearing your hair out.

", + "category": "developpement", + "link": "https://korben.info/anthropic-cookbook-claude-accessible-developpeurs.html", + "creator": "Korben", + "pubDate": "Wed, 18 Jun 2025 07:36:53 +0200", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "fr", + "folder": "", + "feed": "korben.info", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "56883f8512938977d5313e6d56b7da86", + "highlights": [] + }, + { + "title": "Torserv - Le serveur web anonyme qui fait tout le boulot", + "description": "

Your classic web servers? Censors find them in 3 clicks. Even with a dodgy VPN, even hidden behind Cloudflare, even praying really hard. So imagine you want to publish something on the web without anyone being able to trace it back to you? Well, it’s actually super simple with torserv, which automatically launches your site as a Tor hidden service.

\n

It’s a hardened static web server with Tor built in natively. No MySQL database lying around, no PHP leaking, just your HTML, CSS and JavaScript files served cleanly. The brilliant part is the zero configuration: you launch the binary and presto, your site becomes reachable via an automatically generated .onion address.

", + "content": "

Your classic web servers? Censors find them in 3 clicks. Even with a dodgy VPN, even hidden behind Cloudflare, even praying really hard. So imagine you want to publish something on the web without anyone being able to trace it back to you? Well, it’s actually super simple with torserv, which automatically launches your site as a Tor hidden service.

\n

It’s a hardened static web server with Tor built in natively. No MySQL database lying around, no PHP leaking, just your HTML, CSS and JavaScript files served cleanly. The brilliant part is the zero configuration: you launch the binary and presto, your site becomes reachable via an automatically generated .onion address.

", + "category": "securite-vie-privee", + "link": "https://korben.info/torserv-serveur-web-anonyme-tor-zero-config.html", + "creator": "Korben", + "pubDate": "Wed, 18 Jun 2025 06:25:57 +0200", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "fr", + "folder": "", + "feed": "korben.info", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "a97fb114557cdfdc6d1b467c7ee3ffc7", + "highlights": [] + }, + { + "title": "Nintendo Switch 2 - Vos sauvegardes peuvent détruire votre console", + "description": "

You thought backing up your own games was legal? Think again, because Nintendo has decided it isn’t, and they can now flat-out destroy your Switch 2 remotely if you use a MIG flash cart.

\n

Welcome to the new era of totalitarian gaming, folks! Nintendo has just crossed a red line that even Sony or Microsoft had never dared to cross: they now give themselves the right to permanently brick your console if you don’t follow their rules. And the craziest part of the story is that they can do it even if you’re only playing YOUR OWN backed-up games.

", + "content": "

You thought backing up your own games was legal? Think again, because Nintendo has decided it isn’t, and they can now flat-out destroy your Switch 2 remotely if you use a MIG flash cart.

\n

Welcome to the new era of totalitarian gaming, folks! Nintendo has just crossed a red line that even Sony or Microsoft had never dared to cross: they now give themselves the right to permanently brick your console if you don’t follow their rules. And the craziest part of the story is that they can do it even if you’re only playing YOUR OWN backed-up games.

", + "category": "culture-geek", + "link": "https://korben.info/nintendo-switch-2-mig-flash-brick-sauvegarde.html", + "creator": "Korben", + "pubDate": "Wed, 18 Jun 2025 00:12:00 +0200", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "fr", + "folder": "", + "feed": "korben.info", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "05e4ba480d5fe9cbd4878405d4cc06c2", + "highlights": [] + }, + { + "title": "OpenAI passe du côté obscur - 200M$ pour militariser ChatGPT", + "description": "

200 million dollars to teach ChatGPT to play Call of Duty for real. No, it’s not a joke: OpenAI has just signed with the Pentagon, and frankly, it leaves a bit of a bitter taste in my mouth.

\n

So here are the raw facts, because that’s always the best starting point. On June 16, 2025, the Pentagon made official a one-year, 200-million-dollar contract with OpenAI. The objective? To develop “frontier AI capabilities” to address national security challenges, covering administrative applications as well as hard combat stuff. We’re talking proactive cyberdefense, optimizing healthcare for service members, acquisition data analysis… In short, serious business.

", + "content": "

200 million dollars to teach ChatGPT to play Call of Duty for real. No, it’s not a joke: OpenAI has just signed with the Pentagon, and frankly, it leaves a bit of a bitter taste in my mouth.

\n

So here are the raw facts, because that’s always the best starting point. On June 16, 2025, the Pentagon made official a one-year, 200-million-dollar contract with OpenAI. The objective? To develop “frontier AI capabilities” to address national security challenges, covering administrative applications as well as hard combat stuff. We’re talking proactive cyberdefense, optimizing healthcare for service members, acquisition data analysis… In short, serious business.

", + "category": "developpement", + "link": "https://korben.info/openai-pentagone-contrat-militaire.html", + "creator": "Korben", + "pubDate": "Tue, 17 Jun 2025 23:53:13 +0200", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "fr", + "folder": "", + "feed": "korben.info", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "ee00563515caaba46e443ba7a4c6028d", + "highlights": [] + }, + { + "title": "Test complet du STRONG Leap S3+, une box Google TV efficace à petit prix", + "description": "

– Guest article, written by Vincent Lautier, contains Amazon affiliate links –

\n

When you appreciate the quality of the Apple TV, you can also admit that for a secondary television with no built-in apps, it’s clearly too expensive and too sophisticated. In my case, I was looking for a simple, fast solution capable of running Netflix, Prime Video, Disney+, and above all Molotov and Plex. Mission accomplished for the STRONG Leap S3+, sold for under €60 and often on sale on Amazon.

", + "content": "

– Guest article, written by Vincent Lautier, contains Amazon affiliate links –

\n

When you appreciate the quality of the Apple TV, you can also admit that for a secondary television with no built-in apps, it’s clearly too expensive and too sophisticated. In my case, I was looking for a simple, fast solution capable of running Netflix, Prime Video, Disney+, and above all Molotov and Plex. Mission accomplished for the STRONG Leap S3+, sold for under €60 and often on sale on Amazon.

", + "category": "systemes-materiel", + "link": "https://korben.info/test-complet-du-strong-leap-s3-une-box-google-tv-efficace-a-petit-prix.html", + "creator": "Korben", + "pubDate": "Tue, 17 Jun 2025 10:54:41 +0200", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "fr", + "folder": "", + "feed": "korben.info", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "d00dacb392a5134e91ca8917f83c6775", + "highlights": [] + }, + { + "title": "ASIF - Le format révolutionnaire d'Apple (MacOS Tahoe) pour vos VM", + "description": "

Tired of waiting forever for your Windows VM to copy a file? Do you stare at the progress bar wondering whether your 2 TB SSD has suddenly decided to turn into a floppy disk? Then Apple has finally heard your cries of despair with ASIF in macOS Tahoe.

\n

For those who work with virtual machines, it’s a daily tragedy. You can have a Mac Studio M2 Ultra with an SSD pushing 7 GB/s, but as soon as you launch a Linux or Windows VM, it’s a disaster. UDSP (Sparse Image) disk images cap out at a pitiful 100 MB/s, or even 0.1 GB/s when encrypted. Your top-of-the-line NVMe SSD effectively turns into a 90s hard drive.

", + "content": "

Tired of waiting forever for your Windows VM to copy a file? Do you stare at the progress bar wondering whether your 2 TB SSD has suddenly decided to turn into a floppy disk? Then Apple has finally heard your cries of despair with ASIF in macOS Tahoe.

\n

For those who work with virtual machines, it’s a daily tragedy. You can have a Mac Studio M2 Ultra with an SSD pushing 7 GB/s, but as soon as you launch a Linux or Windows VM, it’s a disaster. UDSP (Sparse Image) disk images cap out at a pitiful 100 MB/s, or even 0.1 GB/s when encrypted. Your top-of-the-line NVMe SSD effectively turns into a 90s hard drive.

", + "category": "systemes-materiel", + "link": "https://korben.info/apple-asif-format-disque-tahoe-performances.html", + "creator": "Korben", + "pubDate": "Tue, 17 Jun 2025 10:42:43 +0200", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "fr", + "folder": "", + "feed": "korben.info", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "2527921b10ead40f16c6dc6914b3f2fe", + "highlights": [] + }, + { + "title": "Tritium - Un éditeur de texte en Rust pour les avocats", + "description": "

What if we applied the philosophy of development IDEs to legal tools?

\n

That’s exactly what Drew Miller did with Tritium, a text editor written in Rust that treats legal documents like code projects, with automatic annotation, built-in redlining and 60 FPS performance.

\n

Drew Miller is not just anyone in this story. This former corporate lawyer at Schulte Roth & Zabel in London spent more than 10 years juggling transactional law and software development. In August 2024 he took the plunge and launched Tritium Legal Technologies with a clear vision: revolutionize word processing for business lawyers. His observation: today’s tools, Word first among them, are bloatware that has been in use for 40 years without meeting the specific needs of the legal sector.

", + "content": "

What if we applied the philosophy of development IDEs to legal tools?

\n

That’s exactly what Drew Miller did with Tritium, a text editor written in Rust that treats legal documents like code projects, with automatic annotation, built-in redlining and 60 FPS performance.

\n

Drew Miller is not just anyone in this story. This former corporate lawyer at Schulte Roth & Zabel in London spent more than 10 years juggling transactional law and software development. In August 2024 he took the plunge and launched Tritium Legal Technologies with a clear vision: revolutionize word processing for business lawyers. His observation: today’s tools, Word first among them, are bloatware that has been in use for 40 years without meeting the specific needs of the legal sector.

", + "category": "outils-services", + "link": "https://korben.info/tritium-processeur-texte-avocats-rust.html", + "creator": "Korben", + "pubDate": "Tue, 17 Jun 2025 06:42:38 +0200", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "fr", + "folder": "", + "feed": "korben.info", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "2a935c2e179030dcead212b0aebb7712", + "highlights": [] + }, + { + "title": "NormCap - Un OCR gratuit pour capturer directement le texte", + "description": "

Have you, too, already spent 10 minutes retyping, line by line, a snippet of code found in a Stack Overflow screenshot?

\n

Congratulations, you’re a member of the cyber-digital masochists club! Fortunately, I’ve found you the miracle cure: NormCap, a little tool that captures the text directly instead of producing useless images.

\n

Instead of taking a classic screenshot that gives you an image to store somewhere, NormCap uses OCR (optical character recognition) to extract the text directly. You select an area of your screen and presto, the text lands in your clipboard, ready to be pasted wherever you want.

", + "content": "

Have you, too, already spent 10 minutes retyping, line by line, a snippet of code found in a Stack Overflow screenshot?

\n

Congratulations, you’re a member of the cyber-digital masochists club! Fortunately, I’ve found you the miracle cure: NormCap, a little tool that captures the text directly instead of producing useless images.

\n

Instead of taking a classic screenshot that gives you an image to store somewhere, NormCap uses OCR (optical character recognition) to extract the text directly. You select an area of your screen and presto, the text lands in your clipboard, ready to be pasted wherever you want.

", + "category": "outils-services", + "link": "https://korben.info/normcap-ocr-gratuit-capture-texte-directement.html", + "creator": "Korben", + "pubDate": "Mon, 16 Jun 2025 09:30:13 +0200", + "enclosure": "", + "enclosureType": "", + "image": "", + "id": "", + "language": "fr", + "folder": "", + "feed": "korben.info", + "read": false, + "favorite": false, + "created": false, + "tags": [], + "hash": "b9a2301ded855dcbab8af5431c4ad3c0", + "highlights": [] + }, { "title": "LiteLLM – Pour discuter avec toutes les API LLM en utilisant la syntaxe OpenAI", "description": "LiteLLM est une bibliothèque Python qui simplifie l'interaction avec diverses API de modèles de langage (LLM) en utilisant le format de l'API OpenAI. Elle permet l'utilisation de fonctions telles que la génération de texte et la traduction. L'installation se fait via `pip install litellm`, et son utilisation nécessite de définir des variables d'environnement et de créer un objet LiteLLM. LiteLLM supporte également un proxy pour rediriger les requêtes vers le modèle souhaité et offre des fonctionnalités supplémentaires comme le streaming, la gestion des exceptions, et le suivi des coûts. 
More information is available on the LiteLLM GitHub page.", diff --git a/.obsidian/plugins/surfing/data.json b/.obsidian/plugins/surfing/data.json deleted file mode 100644 index 58926284..00000000 --- a/.obsidian/plugins/surfing/data.json +++ /dev/null @@ -1,65 +0,0 @@ -{ - "defaultSearchEngine": "duckduckgo", - "customSearchEngine": [ - { - "name": "duckduckgo", - "url": "https://duckduckgo.com/?q=" - } - ], - "hoverPopover": true, - "ignoreList": [ - "notion", - "craft" - ], - "alwaysShowCustomSearch": false, - "showOtherSearchEngines": false, - "showSearchBarInPage": false, - "markdownPath": "/", - "customHighlightFormat": false, - "highlightFormat": "[{CONTENT}]({URL})", - "highlightInSameTab": false, - "openInSameTab": false, - "showRefreshButton": false, - "openInObsidianWeb": false, - "useCustomIcons": false, - "useWebview": false, - "useIconList": true, - "darkMode": true, - "randomBackground": false, - "lastOpenedFiles": false, - "bookmarkManager": { - "openBookMark": false, - "saveBookMark": false, - "sendToReadWise": false, - "pagination": "12", - "category": "- Computer\n - 算法\n - 数据结构\n- obsidian\n - surfing\n - dataview\n", - "defaultCategory": "ROOT", - "defaultColumnList": [ - "name", - "description", - "url", - "category", - "tags", - "created", - "modified", - "action" - ], - "defaultFilterType": "tree" - }, - "treeData": [ - { - "id": "e6ebe14ad3720265", - "parent": 0, - "droppable": true, - "text": "0SK42 - comment mieux apprendre", - "data": { - "fileType": "site", - "fileSize": "", - "icon": {} - } - } - ], - "enableHtmlPreview": true, - "supportLivePreviewInlineUrl": false, - "enableTreeView": true -} \ No newline at end of file diff --git a/.obsidian/plugins/surfing/main.js b/.obsidian/plugins/surfing/main.js deleted file mode 100644 index 96281e4f..00000000 --- a/.obsidian/plugins/surfing/main.js +++ /dev/null @@ -1,664 +0,0 @@ -"use strict";var $D=Object.defineProperty;var ID=(e,t,n)=>t in
e?$D(e,t,{enumerable:!0,configurable:!0,writable:!0,value:n}):e[t]=n;var Ce=(e,t,n)=>ID(e,typeof t!="symbol"?t+"":t,n);const ke=require("obsidian"),Un=require("electron"),TD=require("@codemirror/view");function v2(e,t){for(var n=0;nr[o]})}}}return Object.freeze(Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}))}const PD={},MD={},ND={},RD={},h2={"Search with":" Search with ","or enter address":" or enter address","Default Search Engine":"Default Search Engine","Set Custom Search Engine Url":"Set Custom Search Engine Url","Set custom search engine url for yourself. 'Duckduckgo' By default":"Set custom search engine url for yourself. 'Duckduckgo' By default","Custom Link to Highlight Format":"Custom Link to Highlight Format","Copy Link to Highlight Format":"Copy Link to Highlight Format","Set copy link to text fragment format. [{CONTENT}]({URL}) By default. You can also set {TIME:YYYY-MM-DD HH:mm:ss} to get the current date.":"Set copy link to text fragment format. [{CONTENT}]({URL}) By default. 
You can also set {TIME:YYYY-MM-DD HH:mm:ss} to get the current date.","Open URL In Same Tab":"Open In Same Tab",Custom:"Custom",Baidu:"Baidu",Yahoo:"Yahoo",Bing:"Bing",Google:"Google",DuckDuckGo:"DuckDuckGo","Toggle Same Tab In Web Browser":"Toggle Same Tab In Web Browser","Clear Current Page History":"Clear Current Page History","Open Current URL In External Browser":"Open Current URL In External Browser","Search Text":"Search Text","Copy Plain Text":"Copy Plain Text","Copy Link to Highlight":"Copy Link to Highlight","Copy Video Timestamp":"Copy Video Time","Open URL In Obsidian Web From Other Software":"Open URL In Obsidian Web From Other Software","(Reload to take effect)":"(Reload to take effect)","Copy BookmarkLets Success":"Copy BookmarkLets Success","Refresh Current Page":"Refresh Current Page","Show Search Bar In Empty Page":"Show Search Bar In Empty Page","You enabled obsidian-web-browser plugin, please disable it/disable surfing to avoid conflict.":"You enabled obsidian-web-browser plugin, please disable it/disable Surfing to avoid conflict.","You didn't enable show tab title bar in apperance settings, please enable it to use surfing happily.":"You didn't enable show tab header in apperance settings, please enable it to use Surfing happily.","Get Current Timestamp from Web Browser":"Get Current Timestamp from Web Browser","Search In Current Page Title Bar":"Search In Current Page Title Bar"," <- Drag or click on me":" <- Drag or click on me",Name:"Name",Url:"Url","Custom Search":"Custom Search","Delete Custom Search":"Delete Custom Search","Add new custom search engine":"Add new custom search engine","Search all settings":"Search all settings",General:"General",Search:"Search",Bookmark:"Bookmark",Theme:"Theme","Always Show Custom Engines":"Always Show Custom Engines","Save Current Page As Markdown":"Save Current Page As Markdown","Save As Markdown Path":"Save As Markdown Path","Path like /_Tempcard":"Path like /_Tempcard","Search Engine":"Search 
Engine",settings:"settings","Using ":"Using "," to search":" to search","Surfing Iframe":"Surfing Iframe","Surfing is using iframe to prevent crashed when loading some websites.":"Surfing is using iframe to prevent crashed when loading some websites.","Open With External Browser":"Open With External Browser","Open With Surfing":"Open With Surfing","When click on the URL from same domain name in the note, jump to the same surfing view rather than opening a new Surfing view.":"When click on the URL from same domain name in the note, jump to the same surfing view rather than opening a new Surfing view.","Jump to Opened Page":"Jump to Opened Page","Open Quick Switcher":"Open Quick Switcher | Ctrl/CMD+O","Close Current Leaf":"Close Current Leaf | Ctrl/CMD+W","Create A New Note":"Create A New Note | Ctrl/CMD+N","Show Other Search Engines When Searching":"Show Other Search Engines When Searching","Random Icons From Default Art":"Random Icons From Default Art","Working On, Not Available Now":"Working On, Not Available Now","Toggle Dark Mode":"Toggle Dark Mode","[Experimental] Replace Iframe In Canvas":"[Experimental] Replace Iframe In Canvas","Use icon list to replace defult text actions in empty view":"Use icon list to replace defult text actions in empty view","Open BookmarkBar & Bookmark Manager":"Open BookmarkBar & Bookmark Manager",Pagination:"Pagination",Category:"Category","Default Column List":"Default Column List","Show Refresh Button Near Search Bar":"Show Refresh Button Near Search Bar","Focus On Current Search Bar":"Focus On Current Search Bar","Default Category Filter Type":"Default Filter Type",Tree:"Tree",Menu:"Menu",Description:"Description",Tags:"Tags",Created:"Created",Modified:"Modified",Action:"Action","Search from ":"Search from "," bookmarks":" bookmarks","Save Bookmark When Open URI":"Save Bookmark When Open URI","Copy Current Viewport As Image":"Copy Current Viewport As Image",Back:"Back",Forward:"Forward",star:"star","Copy failed, you may focus on 
surfing view, click the title bar, and try again.":"Copy failed, you may focus on surfing view, click the title bar, and try again.","Default Category (Use , to split)":"Default Category (Use , to split)","Send to ReadWise":"Send to ReadWise","Add a action in page header to Send to ReadWise.":"Add a action in page header to Send to ReadWise.","Disable / to search when on these sites":"Disable / to search when on these sites","Focus search bar via keyboard":"Focus search bar via keyboard","Hover Popover":"Hover Popover","Show a popover when hover on the link.":"Show a popover when hover on the link.","Enable HTML Preview":"Enable HTML Preview","Enable HTML Preview in Surfing":"Enable HTML Preview in Surfing","Show original url":"Show original url","Enable Inline Preview":"Enable Inline Preview","Enable inline preview with surfing. Currently only support Live preview":"Enable inline preview with surfing. Currently only support Live preview","Enable Tree View in Surfing":"Enable Tree View in Surfing","Enable Tree View":"Enable Tree View"},DD={},jD={},LD={},BD={},AD={},zD={},HD={},FD={},_D={},VD={},WD={},UD={},KD={},qD={},XD={},GD={},YD={"Search with":"使用 ","or enter address":" 搜索,或输入地址","Default Search Engine":"默认搜索引擎","Set Custom Search Engine Url":"设置自定义搜索引擎网址","Set custom search engine url for yourself. 'Duckduckgo' By default":"设置自定义搜索引擎网址。默认为'Duckduckgo'","Custom Link to Highlight Format":"自定义指向突出显示的链接的格式","Copy Link to Highlight Format":"复制指向突出显示的链接的格式","Set copy link to text fragment format. [{CONTENT}]({URL}) By default. 
You can also set {TIME:YYYY-MM-DD HH:mm:ss} to get the current date.":"设置复制文本片段的链接的格式。默认为[{CONTENT}]({URL})。你也可以设置{TIME:YYYY-MM-DD HH:mm:ss}来获取当前日期时间。","Open URL In Same Tab":"在固定且唯一标签页中打开网页",Custom:"自定义",Baidu:"百度",Yahoo:"雅虎",Bing:"必应",Google:"谷歌",DuckDuckGo:"DuckDuckGo","Toggle Same Tab In Web Browser":"切换是否在固定且唯一标签页访问网址","Clear Current Page History":"清除当前页面的历史记录","Open Current URL In External Browser":"在外部浏览器中打开当前网址","Search Text":"搜索文本","Copy Plain Text":"复制纯文本","Copy Link to Highlight":"复制指向突出显示的链接","Copy Video Timestamp":"复制视频时间戳","Open URL In Obsidian Web From Other Software":"从别的软件在 Obsidian Web 中打开网址","(Reload to take effect)":"(重启 Ob 以生效)","Copy BookmarkLets Success":"复制 BookmarkLets 成功","Refresh Current Page":"刷新当前页面","Show Search Bar In Empty Page":"在空白页面中显示搜索栏","You enabled obsidian-web-browser plugin, please disable it/disable surfing to avoid conflict.":"你启用了 obsidian-web-browser 插件,请禁用它或禁用 surfing 插件以避免冲突。","You didn't enable show tab title bar in apperance settings, please enable it to use surfing happily.":"你没有在外观设置中启用显示标签页标题,请启用它以便使用 surfing。","Get Current Timestamp from Web Browser":"从浏览器获取当前时间戳","Search In Current Page Title Bar":"在当前页面标题栏中搜索"," <- Drag or click on me":" <- 拖动或点击",Name:"名称",Url:"链接","Custom Search":"自定义搜索","Delete Custom Search":"删除自定义","Add new custom search engine":"添加新的自定义搜索引擎","Search all settings":"搜索设置",General:"常规选项",Search:"搜索选项",Theme:"主题选项",Bookmark:"书签选项","Always Show Custom Engines":"始终显示自定义引擎","Save Current Page As Markdown":"保存当前网页为 Markdown","Save As Markdown Path":"保存为 Markdown 路径","Path like /_Tempcard":"路径例如 /_Tempcard","Search Engine":"搜索引擎",settings:"设置","Using ":"使用"," to search":"来检索","Surfing Iframe":"Surfing Iframe","Surfing is using iframe to prevent crashed when loading some websites.":"Surfing 使用 iframe 来防止加载某些网站时崩溃。","Open With External Browser":"在外部浏览器中打开","Open With Surfing":"在 Surfing 中打开","When click on the URL from same domain name in the note, jump to the same surfing view rather than opening a 
new Surfing view.":"当在笔记中点击相同域名的 URL 时,跳转到相同的 Surfing 视图而不是打开新的 Surfing 视图。","Jump to Opened Page":"跳转到已打开的页面","Open Quick Switcher":"打开快速切换 | Ctrl/CMD+O","Close Current Leaf":"关闭当前的页面 | Ctrl/CMD+W","Create A New Note":"新建笔记 | Ctrl/CMD+N","Show Other Search Engines When Searching":"搜索时显示其它搜索引擎","Random Icons From Default Art":"从默认的 Art 中挑选随机 Icon","Working On, Not Available Now":"正在建设中,当前不可用","Toggle Dark Mode":"切换夜间模式","[Experimental] Replace Iframe In Canvas":"【实验性功能】在 Canvas 中替换网页节点","Use icon list to replace defult text actions in empty view":"使用图标列替换空页面中的默认文本操作","Open BookmarkBar & Bookmark Manager":"打开书签栏和书签管理器",Pagination:"分页书签数",Category:"分类","Default Column List":"默认的列","Show Refresh Button Near Search Bar":"在搜索栏旁边显示刷新按钮","Focus On Current Search Bar":"聚焦到当前的搜索栏","Default Category Filter Type":"默认目录过滤形式",Tree:"树状",Menu:"菜单",Description:"描述",Tags:"标签",Created:"创建时间",Modified:"修改时间",Action:"操作","Search from ":"从"," bookmarks":"个书签中搜索","Save Bookmark When Open URI":"打开 URI 时保存书签","Copy Current Viewport As Image":"复制当前页面为图片",Back:"返回",Forward:"前进",star:"星标","Copy failed, you may focus on surfing view, click the title bar, and try again.":"复制失败,你可能聚焦到了 Surfing 视图,点击标题栏,然后再试一次。","Default Category (Use , to split)":"默认分类 (用,分层)","Send to ReadWise":"发送到 ReadWise","Add a action in page header to Send to ReadWise.":"在页面标题栏中添加一个动作来发送到 ReadWise。","Disable / to search when on these sites":"当在这些网站中禁止按 / 来搜索的功能","Focus search bar via keyboard":"通过键盘聚焦到搜索栏","Show a popover when hover on the link.":"当鼠标悬停在链接上时显示一个弹出窗口。","Hover Popover":"悬停弹出窗口","Show original url":"显示原始链接","Enable Inline Preview":"启用内联预览","Enable inline preview with surfing. 
Currently only support Live preview":"启用 Surfing 的内联预览。目前仅支持实时预览","Enable HTML Preview":"启用 HTML 预览","Enable HTML Preview in Surfing":"在 Surfing 中启用 HTML 预览"},QD={},ZD={ar:PD,cs:MD,da:ND,de:RD,en:h2,"en-gb":DD,es:jD,fr:LD,hi:BD,id:AD,it:zD,ja:HD,ko:FD,nl:_D,nn:VD,pl:WD,pt:UD,"pt-br":KD,ro:qD,ru:XD,tr:GD,"zh-cn":YD,"zh-tw":QD},ZS=ZD[ke.moment.locale()];function je(e){return ZS&&ZS[e]||h2[e]}var ao="top",Vo="bottom",Wo="right",so="left",Oy="auto",hd=[ao,Vo,Wo,so],Al="start",Au="end",JD="clippingParents",g2="viewport",Gc="popper",ej="reference",JS=hd.reduce(function(e,t){return e.concat([t+"-"+Al,t+"-"+Au])},[]),m2=[].concat(hd,[Oy]).reduce(function(e,t){return e.concat([t,t+"-"+Al,t+"-"+Au])},[]),tj="beforeRead",nj="read",rj="afterRead",oj="beforeMain",ij="main",aj="afterMain",sj="beforeWrite",lj="write",cj="afterWrite",uj=[tj,nj,rj,oj,ij,aj,sj,lj,cj];function Ai(e){return e?(e.nodeName||"").toLowerCase():null}function xo(e){if(e==null)return window;if(e.toString()!=="[object Window]"){var t=e.ownerDocument;return t&&t.defaultView||window}return e}function Os(e){var t=xo(e).Element;return e instanceof t||e instanceof Element}function _o(e){var t=xo(e).HTMLElement;return e instanceof t||e instanceof HTMLElement}function $y(e){if(typeof ShadowRoot>"u")return!1;var t=xo(e).ShadowRoot;return e instanceof t||e instanceof ShadowRoot}function dj(e){var t=e.state;Object.keys(t.elements).forEach(function(n){var r=t.styles[n]||{},o=t.attributes[n]||{},i=t.elements[n];!_o(i)||!Ai(i)||(Object.assign(i.style,r),Object.keys(o).forEach(function(a){var s=o[a];s===!1?i.removeAttribute(a):i.setAttribute(a,s===!0?"":s)}))})}function fj(e){var t=e.state,n={popper:{position:t.options.strategy,left:"0",top:"0",margin:"0"},arrow:{position:"absolute"},reference:{}};return Object.assign(t.elements.popper.style,n.popper),t.styles=n,t.elements.arrow&&Object.assign(t.elements.arrow.style,n.arrow),function(){Object.keys(t.elements).forEach(function(r){var 
o=t.elements[r],i=t.attributes[r]||{},a=Object.keys(t.styles.hasOwnProperty(r)?t.styles[r]:n[r]),s=a.reduce(function(c,u){return c[u]="",c},{});!_o(o)||!Ai(o)||(Object.assign(o.style,s),Object.keys(i).forEach(function(c){o.removeAttribute(c)}))})}}const pj={name:"applyStyles",enabled:!0,phase:"write",fn:dj,effect:fj,requires:["computeStyles"]};function Ni(e){return e.split("-")[0]}var Ss=Math.max,_p=Math.min,zl=Math.round;function $0(){var e=navigator.userAgentData;return e!=null&&e.brands&&Array.isArray(e.brands)?e.brands.map(function(t){return t.brand+"/"+t.version}).join(" "):navigator.userAgent}function b2(){return!/^((?!chrome|android).)*safari/i.test($0())}function Hl(e,t,n){t===void 0&&(t=!1),n===void 0&&(n=!1);var r=e.getBoundingClientRect(),o=1,i=1;t&&_o(e)&&(o=e.offsetWidth>0&&zl(r.width)/e.offsetWidth||1,i=e.offsetHeight>0&&zl(r.height)/e.offsetHeight||1);var a=Os(e)?xo(e):window,s=a.visualViewport,c=!b2()&&n,u=(r.left+(c&&s?s.offsetLeft:0))/o,p=(r.top+(c&&s?s.offsetTop:0))/i,v=r.width/o,h=r.height/i;return{width:v,height:h,top:p,right:u+v,bottom:p+h,left:u,x:u,y:p}}function Iy(e){var t=Hl(e),n=e.offsetWidth,r=e.offsetHeight;return Math.abs(t.width-n)<=1&&(n=t.width),Math.abs(t.height-r)<=1&&(r=t.height),{x:e.offsetLeft,y:e.offsetTop,width:n,height:r}}function y2(e,t){var n=t.getRootNode&&t.getRootNode();if(e.contains(t))return!0;if(n&&$y(n)){var r=t;do{if(r&&e.isSameNode(r))return!0;r=r.parentNode||r.host}while(r)}return!1}function na(e){return xo(e).getComputedStyle(e)}function vj(e){return["table","td","th"].indexOf(Ai(e))>=0}function Qa(e){return((Os(e)?e.ownerDocument:e.document)||window.document).documentElement}function Cv(e){return Ai(e)==="html"?e:e.assignedSlot||e.parentNode||($y(e)?e.host:null)||Qa(e)}function eC(e){return!_o(e)||na(e).position==="fixed"?null:e.offsetParent}function hj(e){var t=/firefox/i.test($0()),n=/Trident/i.test($0());if(n&&_o(e)){var r=na(e);if(r.position==="fixed")return null}var 
o=Cv(e);for($y(o)&&(o=o.host);_o(o)&&["html","body"].indexOf(Ai(o))<0;){var i=na(o);if(i.transform!=="none"||i.perspective!=="none"||i.contain==="paint"||["transform","perspective"].indexOf(i.willChange)!==-1||t&&i.willChange==="filter"||t&&i.filter&&i.filter!=="none")return o;o=o.parentNode}return null}function gd(e){for(var t=xo(e),n=eC(e);n&&vj(n)&&na(n).position==="static";)n=eC(n);return n&&(Ai(n)==="html"||Ai(n)==="body"&&na(n).position==="static")?t:n||hj(e)||t}function Ty(e){return["top","bottom"].indexOf(e)>=0?"x":"y"}function mu(e,t,n){return Ss(e,_p(t,n))}function gj(e,t,n){var r=mu(e,t,n);return r>n?n:r}function w2(){return{top:0,right:0,bottom:0,left:0}}function x2(e){return Object.assign({},w2(),e)}function S2(e,t){return t.reduce(function(n,r){return n[r]=e,n},{})}var mj=function(t,n){return t=typeof t=="function"?t(Object.assign({},n.rects,{placement:n.placement})):t,x2(typeof t!="number"?t:S2(t,hd))};function bj(e){var t,n=e.state,r=e.name,o=e.options,i=n.elements.arrow,a=n.modifiersData.popperOffsets,s=Ni(n.placement),c=Ty(s),u=[so,Wo].indexOf(s)>=0,p=u?"height":"width";if(!(!i||!a)){var v=mj(o.padding,n),h=Iy(i),m=c==="y"?ao:so,b=c==="y"?Vo:Wo,y=n.rects.reference[p]+n.rects.reference[c]-a[c]-n.rects.popper[p],w=a[c]-n.rects.reference[c],C=gd(i),S=C?c==="y"?C.clientHeight||0:C.clientWidth||0:0,E=y/2-w/2,k=v[m],O=S-h[p]-v[b],$=S/2-h[p]/2+E,T=mu(k,$,O),M=c;n.modifiersData[r]=(t={},t[M]=T,t.centerOffset=T-$,t)}}function yj(e){var t=e.state,n=e.options,r=n.element,o=r===void 0?"[data-popper-arrow]":r;o!=null&&(typeof o=="string"&&(o=t.elements.popper.querySelector(o),!o)||y2(t.elements.popper,o)&&(t.elements.arrow=o))}const wj={name:"arrow",enabled:!0,phase:"main",fn:bj,effect:yj,requires:["popperOffsets"],requiresIfExists:["preventOverflow"]};function Fl(e){return e.split("-")[1]}var xj={top:"auto",right:"auto",bottom:"auto",left:"auto"};function Sj(e,t){var n=e.x,r=e.y,o=t.devicePixelRatio||1;return{x:zl(n*o)/o||0,y:zl(r*o)/o||0}}function tC(e){var 
t,n=e.popper,r=e.popperRect,o=e.placement,i=e.variation,a=e.offsets,s=e.position,c=e.gpuAcceleration,u=e.adaptive,p=e.roundOffsets,v=e.isFixed,h=a.x,m=h===void 0?0:h,b=a.y,y=b===void 0?0:b,w=typeof p=="function"?p({x:m,y}):{x:m,y};m=w.x,y=w.y;var C=a.hasOwnProperty("x"),S=a.hasOwnProperty("y"),E=so,k=ao,O=window;if(u){var $=gd(n),T="clientHeight",M="clientWidth";if($===xo(n)&&($=Qa(n),na($).position!=="static"&&s==="absolute"&&(T="scrollHeight",M="scrollWidth")),o===ao||(o===so||o===Wo)&&i===Au){k=Vo;var P=v&&$===O&&O.visualViewport?O.visualViewport.height:$[T];y-=P-r.height,y*=c?1:-1}if(o===so||(o===ao||o===Vo)&&i===Au){E=Wo;var R=v&&$===O&&O.visualViewport?O.visualViewport.width:$[M];m-=R-r.width,m*=c?1:-1}}var A=Object.assign({position:s},u&&xj),V=p===!0?Sj({x:m,y},xo(n)):{x:m,y};if(m=V.x,y=V.y,c){var z;return Object.assign({},A,(z={},z[k]=S?"0":"",z[E]=C?"0":"",z.transform=(O.devicePixelRatio||1)<=1?"translate("+m+"px, "+y+"px)":"translate3d("+m+"px, "+y+"px, 0)",z))}return Object.assign({},A,(t={},t[k]=S?y+"px":"",t[E]=C?m+"px":"",t.transform="",t))}function Cj(e){var t=e.state,n=e.options,r=n.gpuAcceleration,o=r===void 0?!0:r,i=n.adaptive,a=i===void 0?!0:i,s=n.roundOffsets,c=s===void 0?!0:s,u={placement:Ni(t.placement),variation:Fl(t.placement),popper:t.elements.popper,popperRect:t.rects.popper,gpuAcceleration:o,isFixed:t.options.strategy==="fixed"};t.modifiersData.popperOffsets!=null&&(t.styles.popper=Object.assign({},t.styles.popper,tC(Object.assign({},u,{offsets:t.modifiersData.popperOffsets,position:t.options.strategy,adaptive:a,roundOffsets:c})))),t.modifiersData.arrow!=null&&(t.styles.arrow=Object.assign({},t.styles.arrow,tC(Object.assign({},u,{offsets:t.modifiersData.arrow,position:"absolute",adaptive:!1,roundOffsets:c})))),t.attributes.popper=Object.assign({},t.attributes.popper,{"data-popper-placement":t.placement})}const Ej={name:"computeStyles",enabled:!0,phase:"beforeWrite",fn:Cj,data:{}};var zf={passive:!0};function kj(e){var 
t=e.state,n=e.instance,r=e.options,o=r.scroll,i=o===void 0?!0:o,a=r.resize,s=a===void 0?!0:a,c=xo(t.elements.popper),u=[].concat(t.scrollParents.reference,t.scrollParents.popper);return i&&u.forEach(function(p){p.addEventListener("scroll",n.update,zf)}),s&&c.addEventListener("resize",n.update,zf),function(){i&&u.forEach(function(p){p.removeEventListener("scroll",n.update,zf)}),s&&c.removeEventListener("resize",n.update,zf)}}const Oj={name:"eventListeners",enabled:!0,phase:"write",fn:function(){},effect:kj,data:{}};var $j={left:"right",right:"left",bottom:"top",top:"bottom"};function Cp(e){return e.replace(/left|right|bottom|top/g,function(t){return $j[t]})}var Ij={start:"end",end:"start"};function nC(e){return e.replace(/start|end/g,function(t){return Ij[t]})}function Py(e){var t=xo(e),n=t.pageXOffset,r=t.pageYOffset;return{scrollLeft:n,scrollTop:r}}function My(e){return Hl(Qa(e)).left+Py(e).scrollLeft}function Tj(e,t){var n=xo(e),r=Qa(e),o=n.visualViewport,i=r.clientWidth,a=r.clientHeight,s=0,c=0;if(o){i=o.width,a=o.height;var u=b2();(u||!u&&t==="fixed")&&(s=o.offsetLeft,c=o.offsetTop)}return{width:i,height:a,x:s+My(e),y:c}}function Pj(e){var t,n=Qa(e),r=Py(e),o=(t=e.ownerDocument)==null?void 0:t.body,i=Ss(n.scrollWidth,n.clientWidth,o?o.scrollWidth:0,o?o.clientWidth:0),a=Ss(n.scrollHeight,n.clientHeight,o?o.scrollHeight:0,o?o.clientHeight:0),s=-r.scrollLeft+My(e),c=-r.scrollTop;return na(o||n).direction==="rtl"&&(s+=Ss(n.clientWidth,o?o.clientWidth:0)-i),{width:i,height:a,x:s,y:c}}function Ny(e){var t=na(e),n=t.overflow,r=t.overflowX,o=t.overflowY;return/auto|scroll|overlay|hidden/.test(n+o+r)}function C2(e){return["html","body","#document"].indexOf(Ai(e))>=0?e.ownerDocument.body:_o(e)&&Ny(e)?e:C2(Cv(e))}function bu(e,t){var n;t===void 0&&(t=[]);var r=C2(e),o=r===((n=e.ownerDocument)==null?void 0:n.body),i=xo(r),a=o?[i].concat(i.visualViewport||[],Ny(r)?r:[]):r,s=t.concat(a);return o?s:s.concat(bu(Cv(a)))}function I0(e){return 
Object.assign({},e,{left:e.x,top:e.y,right:e.x+e.width,bottom:e.y+e.height})}function Mj(e,t){var n=Hl(e,!1,t==="fixed");return n.top=n.top+e.clientTop,n.left=n.left+e.clientLeft,n.bottom=n.top+e.clientHeight,n.right=n.left+e.clientWidth,n.width=e.clientWidth,n.height=e.clientHeight,n.x=n.left,n.y=n.top,n}function rC(e,t,n){return t===g2?I0(Tj(e,n)):Os(t)?Mj(t,n):I0(Pj(Qa(e)))}function Nj(e){var t=bu(Cv(e)),n=["absolute","fixed"].indexOf(na(e).position)>=0,r=n&&_o(e)?gd(e):e;return Os(r)?t.filter(function(o){return Os(o)&&y2(o,r)&&Ai(o)!=="body"}):[]}function Rj(e,t,n,r){var o=t==="clippingParents"?Nj(e):[].concat(t),i=[].concat(o,[n]),a=i[0],s=i.reduce(function(c,u){var p=rC(e,u,r);return c.top=Ss(p.top,c.top),c.right=_p(p.right,c.right),c.bottom=_p(p.bottom,c.bottom),c.left=Ss(p.left,c.left),c},rC(e,a,r));return s.width=s.right-s.left,s.height=s.bottom-s.top,s.x=s.left,s.y=s.top,s}function E2(e){var t=e.reference,n=e.element,r=e.placement,o=r?Ni(r):null,i=r?Fl(r):null,a=t.x+t.width/2-n.width/2,s=t.y+t.height/2-n.height/2,c;switch(o){case ao:c={x:a,y:t.y-n.height};break;case Vo:c={x:a,y:t.y+t.height};break;case Wo:c={x:t.x+t.width,y:s};break;case so:c={x:t.x-n.width,y:s};break;default:c={x:t.x,y:t.y}}var u=o?Ty(o):null;if(u!=null){var p=u==="y"?"height":"width";switch(i){case Al:c[u]=c[u]-(t[p]/2-n[p]/2);break;case Au:c[u]=c[u]+(t[p]/2-n[p]/2);break}}return c}function zu(e,t){t===void 0&&(t={});var n=t,r=n.placement,o=r===void 0?e.placement:r,i=n.strategy,a=i===void 0?e.strategy:i,s=n.boundary,c=s===void 0?JD:s,u=n.rootBoundary,p=u===void 0?g2:u,v=n.elementContext,h=v===void 0?Gc:v,m=n.altBoundary,b=m===void 0?!1:m,y=n.padding,w=y===void 0?0:y,C=x2(typeof 
w!="number"?w:S2(w,hd)),S=h===Gc?ej:Gc,E=e.rects.popper,k=e.elements[b?S:h],O=Rj(Os(k)?k:k.contextElement||Qa(e.elements.popper),c,p,a),$=Hl(e.elements.reference),T=E2({reference:$,element:E,strategy:"absolute",placement:o}),M=I0(Object.assign({},E,T)),P=h===Gc?M:$,R={top:O.top-P.top+C.top,bottom:P.bottom-O.bottom+C.bottom,left:O.left-P.left+C.left,right:P.right-O.right+C.right},A=e.modifiersData.offset;if(h===Gc&&A){var V=A[o];Object.keys(R).forEach(function(z){var B=[Wo,Vo].indexOf(z)>=0?1:-1,_=[ao,Vo].indexOf(z)>=0?"y":"x";R[z]+=V[_]*B})}return R}function Dj(e,t){t===void 0&&(t={});var n=t,r=n.placement,o=n.boundary,i=n.rootBoundary,a=n.padding,s=n.flipVariations,c=n.allowedAutoPlacements,u=c===void 0?m2:c,p=Fl(r),v=p?s?JS:JS.filter(function(b){return Fl(b)===p}):hd,h=v.filter(function(b){return u.indexOf(b)>=0});h.length===0&&(h=v);var m=h.reduce(function(b,y){return b[y]=zu(e,{placement:y,boundary:o,rootBoundary:i,padding:a})[Ni(y)],b},{});return Object.keys(m).sort(function(b,y){return m[b]-m[y]})}function jj(e){if(Ni(e)===Oy)return[];var t=Cp(e);return[nC(e),t,nC(t)]}function Lj(e){var t=e.state,n=e.options,r=e.name;if(!t.modifiersData[r]._skip){for(var o=n.mainAxis,i=o===void 0?!0:o,a=n.altAxis,s=a===void 0?!0:a,c=n.fallbackPlacements,u=n.padding,p=n.boundary,v=n.rootBoundary,h=n.altBoundary,m=n.flipVariations,b=m===void 0?!0:m,y=n.allowedAutoPlacements,w=t.options.placement,C=Ni(w),S=C===w,E=c||(S||!b?[Cp(w)]:jj(w)),k=[w].concat(E).reduce(function(q,J){return q.concat(Ni(J)===Oy?Dj(t,{placement:J,boundary:p,rootBoundary:v,padding:u,flipVariations:b,allowedAutoPlacements:y}):J)},[]),O=t.rects.reference,$=t.rects.popper,T=new Map,M=!0,P=k[0],R=0;R=0,_=B?"width":"height",H=zu(t,{placement:A,boundary:p,rootBoundary:v,altBoundary:h,padding:u}),j=B?z?Wo:so:z?Vo:ao;O[_]>$[_]&&(j=Cp(j));var L=Cp(j),F=[];if(i&&F.push(H[V]<=0),s&&F.push(H[j]<=0,H[L]<=0),F.every(function(q){return q})){P=A,M=!1;break}T.set(A,F)}if(M)for(var U=b?3:1,D=function(J){var 
Y=k.find(function(Q){var te=T.get(Q);if(te)return te.slice(0,J).every(function(ce){return ce})});if(Y)return P=Y,"break"},W=U;W>0;W--){var G=D(W);if(G==="break")break}t.placement!==P&&(t.modifiersData[r]._skip=!0,t.placement=P,t.reset=!0)}}const Bj={name:"flip",enabled:!0,phase:"main",fn:Lj,requiresIfExists:["offset"],data:{_skip:!1}};function oC(e,t,n){return n===void 0&&(n={x:0,y:0}),{top:e.top-t.height-n.y,right:e.right-t.width+n.x,bottom:e.bottom-t.height+n.y,left:e.left-t.width-n.x}}function iC(e){return[ao,Wo,Vo,so].some(function(t){return e[t]>=0})}function Aj(e){var t=e.state,n=e.name,r=t.rects.reference,o=t.rects.popper,i=t.modifiersData.preventOverflow,a=zu(t,{elementContext:"reference"}),s=zu(t,{altBoundary:!0}),c=oC(a,r),u=oC(s,o,i),p=iC(c),v=iC(u);t.modifiersData[n]={referenceClippingOffsets:c,popperEscapeOffsets:u,isReferenceHidden:p,hasPopperEscaped:v},t.attributes.popper=Object.assign({},t.attributes.popper,{"data-popper-reference-hidden":p,"data-popper-escaped":v})}const zj={name:"hide",enabled:!0,phase:"main",requiresIfExists:["preventOverflow"],fn:Aj};function Hj(e,t,n){var r=Ni(e),o=[so,ao].indexOf(r)>=0?-1:1,i=typeof n=="function"?n(Object.assign({},t,{placement:e})):n,a=i[0],s=i[1];return a=a||0,s=(s||0)*o,[so,Wo].indexOf(r)>=0?{x:s,y:a}:{x:a,y:s}}function Fj(e){var t=e.state,n=e.options,r=e.name,o=n.offset,i=o===void 0?[0,0]:o,a=m2.reduce(function(p,v){return p[v]=Hj(v,t.rects,i),p},{}),s=a[t.placement],c=s.x,u=s.y;t.modifiersData.popperOffsets!=null&&(t.modifiersData.popperOffsets.x+=c,t.modifiersData.popperOffsets.y+=u),t.modifiersData[r]=a}const _j={name:"offset",enabled:!0,phase:"main",requires:["popperOffsets"],fn:Fj};function Vj(e){var t=e.state,n=e.name;t.modifiersData[n]=E2({reference:t.rects.reference,element:t.rects.popper,strategy:"absolute",placement:t.placement})}const Wj={name:"popperOffsets",enabled:!0,phase:"read",fn:Vj,data:{}};function Uj(e){return e==="x"?"y":"x"}function Kj(e){var 
t=e.state,n=e.options,r=e.name,o=n.mainAxis,i=o===void 0?!0:o,a=n.altAxis,s=a===void 0?!1:a,c=n.boundary,u=n.rootBoundary,p=n.altBoundary,v=n.padding,h=n.tether,m=h===void 0?!0:h,b=n.tetherOffset,y=b===void 0?0:b,w=zu(t,{boundary:c,rootBoundary:u,padding:v,altBoundary:p}),C=Ni(t.placement),S=Fl(t.placement),E=!S,k=Ty(C),O=Uj(k),$=t.modifiersData.popperOffsets,T=t.rects.reference,M=t.rects.popper,P=typeof y=="function"?y(Object.assign({},t.rects,{placement:t.placement})):y,R=typeof P=="number"?{mainAxis:P,altAxis:P}:Object.assign({mainAxis:0,altAxis:0},P),A=t.modifiersData.offset?t.modifiersData.offset[t.placement]:null,V={x:0,y:0};if($){if(i){var z,B=k==="y"?ao:so,_=k==="y"?Vo:Wo,H=k==="y"?"height":"width",j=$[k],L=j+w[B],F=j-w[_],U=m?-M[H]/2:0,D=S===Al?T[H]:M[H],W=S===Al?-M[H]:-T[H],G=t.elements.arrow,q=m&&G?Iy(G):{width:0,height:0},J=t.modifiersData["arrow#persistent"]?t.modifiersData["arrow#persistent"].padding:w2(),Y=J[B],Q=J[_],te=mu(0,T[H],q[H]),ce=E?T[H]/2-U-te-Y-R.mainAxis:D-te-Y-R.mainAxis,se=E?-T[H]/2+U+te+Q+R.mainAxis:W+te+Q+R.mainAxis,ne=t.elements.arrow&&gd(t.elements.arrow),ae=ne?k==="y"?ne.clientTop||0:ne.clientLeft||0:0,ee=(z=A==null?void 0:A[k])!=null?z:0,re=j+ce-ee-ae,le=j+se-ee,pe=mu(m?_p(L,re):L,j,m?Ss(F,le):F);$[k]=pe,V[k]=pe-j}if(s){var Oe,ge=k==="x"?ao:so,Re=k==="x"?Vo:Wo,ye=$[O],Te=O==="y"?"height":"width",Ae=ye+w[ge],me=ye-w[Re],Ie=[ao,so].indexOf(C)!==-1,Le=(Oe=A==null?void 0:A[O])!=null?Oe:0,Be=Ie?Ae:ye-T[Te]-M[Te]-Le+R.altAxis,et=Ie?ye+T[Te]+M[Te]-Le-R.altAxis:me,rt=m&&Ie?gj(Be,ye,et):mu(m?Be:Ae,ye,m?et:me);$[O]=rt,V[O]=rt-ye}t.modifiersData[r]=V}}const qj={name:"preventOverflow",enabled:!0,phase:"main",fn:Kj,requiresIfExists:["offset"]};function Xj(e){return{scrollLeft:e.scrollLeft,scrollTop:e.scrollTop}}function Gj(e){return e===xo(e)||!_o(e)?Py(e):Xj(e)}function Yj(e){var t=e.getBoundingClientRect(),n=zl(t.width)/e.offsetWidth||1,r=zl(t.height)/e.offsetHeight||1;return n!==1||r!==1}function Qj(e,t,n){n===void 0&&(n=!1);var 
r=_o(t),o=_o(t)&&Yj(t),i=Qa(t),a=Hl(e,o,n),s={scrollLeft:0,scrollTop:0},c={x:0,y:0};return(r||!r&&!n)&&((Ai(t)!=="body"||Ny(i))&&(s=Gj(t)),_o(t)?(c=Hl(t,!0),c.x+=t.clientLeft,c.y+=t.clientTop):i&&(c.x=My(i))),{x:a.left+s.scrollLeft-c.x,y:a.top+s.scrollTop-c.y,width:a.width,height:a.height}}function Zj(e){var t=new Map,n=new Set,r=[];e.forEach(function(i){t.set(i.name,i)});function o(i){n.add(i.name);var a=[].concat(i.requires||[],i.requiresIfExists||[]);a.forEach(function(s){if(!n.has(s)){var c=t.get(s);c&&o(c)}}),r.push(i)}return e.forEach(function(i){n.has(i.name)||o(i)}),r}function Jj(e){var t=Zj(e);return uj.reduce(function(n,r){return n.concat(t.filter(function(o){return o.phase===r}))},[])}function eL(e){var t;return function(){return t||(t=new Promise(function(n){Promise.resolve().then(function(){t=void 0,n(e())})})),t}}function tL(e){var t=e.reduce(function(n,r){var o=n[r.name];return n[r.name]=o?Object.assign({},o,r,{options:Object.assign({},o.options,r.options),data:Object.assign({},o.data,r.data)}):r,n},{});return Object.keys(t).map(function(n){return t[n]})}var aC={placement:"bottom",modifiers:[],strategy:"absolute"};function sC(){for(var e=arguments.length,t=new Array(e),n=0;n{n.saveSettings()},100)}display(){const{containerEl:n}=this;n.empty(),this.generateSettingsTitle(),this.addTabHeader()}generateSettingsTitle(){const n=this.containerEl.createDiv("wb-setting-title");n.createEl("h2",{text:"Web Browser"}),this.generateSearchBar(n)}addTabHeader(){const n=this.containerEl.createEl("nav",{cls:"wb-setting-header"});this.navigateEl=n.createDiv("wb-setting-tab-group");const 
r=this.containerEl.createDiv("wb-setting-content");this.createTabAndContent("General",this.navigateEl,r,(o,i)=>this.generateGeneralSettings(i,o)),this.createTabAndContent("Search",this.navigateEl,r,(o,i)=>this.generateSearchSettings(i,o)),this.createTabAndContent("Theme",this.navigateEl,r,(o,i)=>this.generateThemeSettings(i,o)),this.createTabAndContent("Bookmark",this.navigateEl,r,(o,i)=>this.generateBookmarkManagerSettings(i,o)),this.createSearchZeroState(r)}generateSearchBar(n){const r=new ke.Setting(n);r.settingEl.style.border="none",r.addSearch(o=>{this.search=o}),this.search.setPlaceholder(je("Search all settings")),this.search.inputEl.oninput=()=>{for(const o of this.tabContent){const i=o[1];i.navButton.removeClass("wb-navigation-item-selected"),i.content.show(),i.heading.show();const a=this.search.getValue();this.selectedTab==""&&a.trim()!=""&&this.searchSettings(a.toLowerCase()),this.selectedTab=""}this.navigateEl.addClass("wb-setting-searching")},this.search.inputEl.onblur=()=>{this.navigateEl.removeClass("wb-setting-searching")},this.search.onChange(o=>{o===""&&this.navigateEl.children[0].dispatchEvent(new PointerEvent("click")),this.searchSettings(o.toLowerCase())})}createTabAndContent(n,r,o,i){const a=this.selectedTab===n,s=r.createDiv("wb-navigation-item");s.addClass("wb-desktop"),ke.setIcon(s.createEl("div",{cls:"wb-navigation-item-icon"}),sL[n]),s.createSpan().setText(je(n)),s.onclick=()=>{if(this.selectedTab==n)return;s.addClass("wb-navigation-item-selected");const v=this.tabContent.get(n);if((v==null?void 0:v.content).show(),this.selectedTab!=""){const h=this.tabContent.get(this.selectedTab);h==null||h.navButton.removeClass("wb-navigation-item-selected"),(h==null?void 0:h.content).hide()}else{this.searchZeroState.hide();for(const h of this.searchSettingInfo)for(const m of h[1])m.containerEl.show();for(const h of this.tabContent){const m=h[1];m.heading.hide(),n!==h[0]&&m.content.hide()}}this.selectedTab=n};const 
u=o.createDiv("wb-tab-settings"),p=u.createEl("h2",{cls:"wb-setting-heading",text:n+" Settings"});p.hide(),u.id=n.toLowerCase().replace(" ","-"),a?s.addClass("wb-navigation-item-selected"):u.hide(),i&&i(u,n),this.tabContent.set(n,{content:u,heading:p,navButton:s})}searchSettings(n){var i;const r=new Set,o=(a,s)=>{a.show(),r.has(s)||r.add(s)};for(const a of this.searchSettingInfo){const s=a[0],c=a[1];for(const u of c)if(n.trim()===""||(i=u.alias)!=null&&i.includes(n)||u.description.includes(n)||u.name.includes(n))o(u.containerEl,s);else if(u.options)for(const p of u.options){if(p.description.toLowerCase().includes(n)||p.name.toLowerCase().includes(n)){o(u.containerEl,s);break}else if(p.options){for(const v of p.options)if(v.description.toLowerCase().includes(n)||v.value.toLowerCase().includes(n)){o(u.containerEl,s);break}}u.containerEl.hide()}else u.containerEl.hide()}for(const a of this.tabContent)r.has(a[0])?a[1].heading.show():a[1].heading.hide();r.size===0?this.searchZeroState.show():this.searchZeroState.hide()}generateGeneralSettings(n,r){this.addOpenInSameTab(n,r),this.addHoverPopover(n,r),this.addEnableHTMLPreview(n,r),this.addTreeView(n,r),this.addInlinePreview(n,r),this.addRefreshButton(n,r),this.addHighlightFormat(n,r),this.addMarkdownPath(n,r),this.addReplaceIframeInCanvas(n,r),this.addOpenInObsidianWeb(n,r),this.addAboutInfo(n,r)}generateSearchSettings(n,r){this.addInpageSearch(n,r),this.addSearchEngine(n,r)}generateThemeSettings(n,r){this.useIconList(n,r),this.addDarkMode(n,r),this.addRandomBackground(n,r),this.addMyIcons(n,r)}generateBookmarkManagerSettings(n,r){this.addBookmarkManagerSettings(n,r)}addSettingToMasterSettingsList(n,r,o="",i="",a=[],s=""){var u;const 
c={containerEl:r,name:o.toLowerCase(),description:i.toLowerCase(),options:a,alias:s};this.searchSettingInfo.has(n)?(u=this.searchSettingInfo.get(n))==null||u.push(c):this.searchSettingInfo.set(n,[c])}createSearchZeroState(n){this.searchZeroState=n.createDiv(),this.searchZeroState.hide(),this.searchZeroState.createEl(ke.Platform.isMobile?"h3":"h2",{text:"No settings match search"}).style.textAlign="center"}addRefreshButton(n,r){const o=je("Show Refresh Button Near Search Bar"),i=new ke.Setting(r).setName(o).addToggle(a=>{a.setValue(this.plugin.settings.showRefreshButton).onChange(async s=>{this.plugin.settings.showRefreshButton=s,this.applySettingsUpdate()})});this.addSettingToMasterSettingsList(n,i.settingEl,o)}addInpageSearch(n,r){let o=je("Show Search Bar In Empty Page"),i=new ke.Setting(r).setName(o).addToggle(a=>{a.setValue(this.plugin.settings.showSearchBarInPage).onChange(async s=>{this.plugin.settings.showSearchBarInPage=s,this.applySettingsUpdate(),setTimeout(()=>{this.display()},200)})});this.addSettingToMasterSettingsList(n,i.settingEl,o),this.plugin.settings.showSearchBarInPage&&(o="Show last opened files",i=new ke.Setting(r).setName(o).addToggle(a=>{a.setValue(this.plugin.settings.lastOpenedFiles).onChange(async s=>{this.plugin.settings.lastOpenedFiles=s,this.applySettingsUpdate()})}),this.addSettingToMasterSettingsList(n,i.settingEl,o))}addSearchEngine(n,r){let o=je("Default Search Engine"),i=new ke.Setting(r).setName(o).addDropdown(async a=>{const s=a.addOption("duckduckgo",je("DuckDuckGo")).addOption("google",je("Google")).addOption("bing",je("Bing")).addOption("yahoo",je("Yahoo")).addOption("baidu",je("Baidu"));this.plugin.settings.customSearchEngine.forEach((c,u)=>{s.addOption(c.name,c.name)}),s.setValue(this.plugin.settings.defaultSearchEngine).onChange(async c=>{this.plugin.settings.defaultSearchEngine=c,this.applySettingsUpdate(),this.display()})});this.addSettingToMasterSettingsList(n,i.settingEl,o),o=je("Show Other Search Engines When 
Searching")+" "+je("(Reload to take effect)"),i=new ke.Setting(r).setName(o).addToggle(a=>{a.setValue(this.plugin.settings.showOtherSearchEngines).onChange(async s=>{this.plugin.settings.showOtherSearchEngines=s,this.applySettingsUpdate()})}),this.addSettingToMasterSettingsList(n,i.settingEl,o),o=je("Focus search bar via keyboard"),i=new ke.Setting(r).setName(o).addToggle(a=>{a.setValue(this.plugin.settings.focusSearchBarViaKeyboard).onChange(async s=>{this.plugin.settings.focusSearchBarViaKeyboard=s,this.applySettingsUpdate(),setTimeout(()=>{this.display()},200)})}),this.addSettingToMasterSettingsList(n,i.settingEl,o),this.plugin.settings.focusSearchBarViaKeyboard&&(o=je("Disable / to search when on these sites"),i=new ke.Setting(r).setName(o).addText(a=>{a.setPlaceholder(zo.ignoreList.join(",")).setValue(this.plugin.settings.ignoreList.join(",")).onChange(async s=>{this.plugin.settings.ignoreList=s.split(","),this.applySettingsUpdate()})}),this.addSettingToMasterSettingsList(n,i.settingEl,o)),o=je("Always Show Custom Engines"),i=new ke.Setting(r).setName(o).addToggle(a=>{a.setValue(this.plugin.settings.alwaysShowCustomSearch).onChange(async s=>{this.plugin.settings.alwaysShowCustomSearch=s,this.applySettingsUpdate(),this.display()})}),this.addSettingToMasterSettingsList(n,i.settingEl,o),this.plugin.settings.alwaysShowCustomSearch&&(typeof this.plugin.settings.customSearchEngine!="object"&&(this.plugin.settings.customSearchEngine=zo.customSearchEngine),o=je("Add new custom search engine"),i=new ke.Setting(r).setName(o).addButton(a=>a.setButtonText("+").onClick(async()=>{this.plugin.settings.customSearchEngine.push({name:`Custom Search ${this.plugin.settings.customSearchEngine.length+1}`,url:"https://www.google.com/search?q="}),await this.plugin.saveSettings(),this.display()})),this.addSettingToMasterSettingsList(n,i.settingEl,o),this.plugin.settings.customSearchEngine.forEach((a,s)=>{o=a.name?a.name:je("Custom 
Search")+`${this.plugin.settings.customSearchEngine.length>1?` ${s+1}`:""}`;const c=new ke.Setting(r).setClass("search-engine-setting").setName(o).addButton(h=>h.setButtonText(je("Delete Custom Search")).onClick(async()=>{this.plugin.settings.customSearchEngine.splice(s,1),await this.plugin.saveSettings(),this.display()})),u=c.settingEl.createEl("div","search-engine-main-settings"),p=u.createEl("div","search-engine-main-settings-name"),v=u.createEl("div","search-engine-main-settings-url");p.createEl("label",{text:je("Name")}),p.createEl("input",{cls:"search-engine-name-input",type:"text",value:a.name}).on("change",".search-engine-name-input",async h=>{const m=h.target;this.plugin.settings.customSearchEngine[s]={...a,name:m.value},await this.plugin.saveSettings()}),v.createEl("label",{text:je("Url")}),v.createEl("input",{cls:"search-engine-url-input",type:"text",value:a.url}).on("change",".search-engine-url-input",async h=>{const m=h.target;this.plugin.settings.customSearchEngine[s]={...a,url:m.value},await this.plugin.saveSettings()}),this.addSettingToMasterSettingsList(n,c.settingEl,o+je("Search Engine"))}))}addMarkdownPath(n,r){const o=je("Save As Markdown Path"),i=new ke.Setting(r).setName(o).addText(a=>a.setPlaceholder(je("Path like /_Tempcard")).setValue(this.plugin.settings.markdownPath).onChange(async s=>{this.plugin.settings.markdownPath=s,this.applySettingsUpdate()}));this.addSettingToMasterSettingsList(n,i.settingEl,o)}addHighlightFormat(n,r){let o=je("Custom Link to Highlight Format"),i=new ke.Setting(r).setName(o).addToggle(s=>{s.setValue(this.plugin.settings.customHighlightFormat).onChange(async c=>{this.plugin.settings.customHighlightFormat=c,this.applySettingsUpdate(),this.display()})});if(this.addSettingToMasterSettingsList(n,i.settingEl,o),!this.plugin.settings.customHighlightFormat)return;o=je("Copy Link to Highlight Format");let a=je("Set copy link to text fragment format. [{CONTENT}]({URL}) By default. 
You can also set {TIME:YYYY-MM-DD HH:mm:ss} to get the current date.");i=new ke.Setting(r).setName(o).setDesc(a).addText(s=>s.setPlaceholder(zo.highlightFormat).setValue(this.plugin.settings.highlightFormat).onChange(async c=>{c===""&&(this.plugin.settings.highlightFormat=zo.highlightFormat,this.applySettingsUpdate(),this.display()),this.plugin.settings.highlightFormat=c,this.applySettingsUpdate()})),this.addSettingToMasterSettingsList(n,i.settingEl,o,a),o=je("Jump to Opened Page"),a=je("When click on the URL from same domain name in the note, jump to the same surfing view rather than opening a new Surfing view."),i=new ke.Setting(r).setName(o).setDesc(a).addToggle(s=>{s.setValue(this.plugin.settings.highlightInSameTab).onChange(async c=>{this.plugin.settings.highlightInSameTab=c,this.applySettingsUpdate()})}),this.addSettingToMasterSettingsList(n,i.settingEl,o,a)}addOpenInSameTab(n,r){const o=je("Open URL In Same Tab"),i=new ke.Setting(r).setName(o).addToggle(a=>{a.setValue(this.plugin.settings.openInSameTab).onChange(async s=>{this.plugin.settings.openInSameTab=s,this.applySettingsUpdate(),this.display()})});this.addSettingToMasterSettingsList(n,i.settingEl,o)}addHoverPopover(n,r){const o=je("Hover Popover"),i=je("Show a popover when hover on the link."),a=new ke.Setting(r).setName(o).setDesc(i).addToggle(s=>{s.setValue(this.plugin.settings.hoverPopover).onChange(async c=>{this.plugin.settings.hoverPopover=c,this.applySettingsUpdate()})});this.addSettingToMasterSettingsList(n,a.settingEl,o,i)}addEnableHTMLPreview(n,r){const o=je("Enable HTML Preview"),i=je("Enable HTML Preview in Surfing"),a=new ke.Setting(r).setName(o).setDesc(i).addToggle(s=>{s.setValue(this.plugin.settings.enableHtmlPreview).onChange(async c=>{this.plugin.settings.enableHtmlPreview=c,this.applySettingsUpdate()})});this.addSettingToMasterSettingsList(n,a.settingEl,o,i)}addTreeView(n,r){const o=je("Enable Tree View"),i=je("Enable Tree View in Surfing"),a=new 
ke.Setting(r).setName(o).setDesc(i).addToggle(s=>{s.setValue(this.plugin.settings.enableTreeView).onChange(async c=>{this.plugin.settings.enableTreeView=c,this.applySettingsUpdate()})});this.addSettingToMasterSettingsList(n,a.settingEl,o,i)}addInlinePreview(n,r){const o=je("Enable Inline Preview")+" [Deprecated]",i=je("Enable inline preview with surfing. Currently only support Live preview"),a=new ke.Setting(r).setName(o).setDesc(i).addToggle(s=>{s.setValue(this.plugin.settings.supportLivePreviewInlineUrl).onChange(async c=>{this.plugin.settings.supportLivePreviewInlineUrl=c,this.applySettingsUpdate()})});this.addSettingToMasterSettingsList(n,a.settingEl,o,i)}addReplaceIframeInCanvas(n,r){const o=je("[Experimental] Replace Iframe In Canvas")+je("(Reload to take effect)"),i=new ke.Setting(r).setName(o).addToggle(a=>{a.setValue(this.plugin.settings.useWebview).onChange(async s=>{this.plugin.settings.useWebview=s,this.applySettingsUpdate()})});this.addSettingToMasterSettingsList(n,i.settingEl,o)}addOpenInObsidianWeb(n,r){const o=je("Open URL In Obsidian Web From Other Software")+" "+je("(Reload to take effect)"),i=new ke.Setting(r).setName(o).addToggle(c=>{c.setValue(this.plugin.settings.openInObsidianWeb).onChange(async u=>{this.plugin.settings.openInObsidianWeb=u,this.applySettingsUpdate(),this.display()})});if(this.addSettingToMasterSettingsList(n,i.settingEl,o),!this.plugin.settings.openInObsidianWeb)return;const a=r.createDiv({cls:"bookmarklets-container"}),s=a.createEl("button",{cls:"wb-btn"});s.createEl("a",{text:"Obsidian BookmarkLets 
Code",cls:"cm-url",href:"javascript:(function(){var%20i%20%3Ddocument.location.href%3B%20document.location.href%3D%22obsidian%3A%2F%2Fweb-open%3Furl%3D%22%20%2B%20encodeURIComponent%28i%29%3B})();"}),s.addEventListener("click",c=>{c.preventDefault(),Un.clipboard.writeText("javascript:(function(){var%20i%20%3Ddocument.location.href%3B%20document.location.href%3D%22obsidian%3A%2F%2Fweb-open%3Furl%3D%22%20%2B%20encodeURIComponent%28i%29%3B})();"),new ke.Notice(je("Copy BookmarkLets Success"))}),a.createEl("span",{cls:"wb-btn-tip",text:je(" <- Drag or click on me")}),this.addSettingToMasterSettingsList(n,a,o)}addAboutInfo(n,r){const o=r.createDiv({cls:"wb-about-card"});ke.setIcon(o.createDiv({cls:"wb-about-icon"}),"surfing"),o.createEl("div",{cls:"wb-about-text",text:"Surfing"});const i=this.plugin.manifest.version,a="https://github.com/Quorafind/Obsidian-Surfing/releases/tag/"+i;o.createEl("a",{cls:"wb-about-version",href:a,text:i}),this.addSettingToMasterSettingsList(n,o,"surfing")}useIconList(n,r){const o=je("Use icon list to replace defult text actions in empty view")+je("(Reload to take effect)"),i=new ke.Setting(r).setName(o).addToggle(a=>{a.setValue(this.plugin.settings.useIconList).onChange(async s=>{this.plugin.settings.useIconList=s,this.applySettingsUpdate()})});this.addSettingToMasterSettingsList(n,i.settingEl,o)}addDarkMode(n,r){const o=je("Toggle Dark Mode"),i=new ke.Setting(r).setName(o).addToggle(a=>{a.setValue(this.plugin.settings.darkMode).onChange(async s=>{this.plugin.settings.darkMode=s,this.applySettingsUpdate()})});this.addSettingToMasterSettingsList(n,i.settingEl,o)}addRandomBackground(n,r){const o="Random Background",i=new ke.Setting(r).setName(o).addToggle(a=>{a.setValue(this.plugin.settings.randomBackground).onChange(async s=>{this.plugin.settings.randomBackground=s,this.applySettingsUpdate()})});this.addSettingToMasterSettingsList(n,i.settingEl,o)}addMyIcons(n,r){let o=je("Working On, Not Available Now"),i=new 
ke.Setting(r).setName(o);i.settingEl.classList.add("wb-theme-settings-working-on"),this.addSettingToMasterSettingsList(n,i.settingEl,"theme"),o=je("Random Icons From Default Art"),i=new ke.Setting(r).setName(o).addToggle(a=>{a.setValue(this.plugin.settings.useCustomIcons).setDisabled(!0).onChange(async s=>{this.plugin.settings.useCustomIcons=s,this.applySettingsUpdate()})}),this.addSettingToMasterSettingsList(n,i.settingEl,"theme surfing")}addBookmarkManagerSettings(n,r){const o=new ke.Setting(r).setName(je("Open BookmarkBar & Bookmark Manager")).addToggle(h=>{h.setValue(this.plugin.settings.bookmarkManager.openBookMark).onChange(async m=>{this.plugin.settings.bookmarkManager.openBookMark=m,this.applySettingsUpdate(),this.display()})});if(this.addSettingToMasterSettingsList(n,o.settingEl,je("Open BookmarkBar & Bookmark Manager")),!this.plugin.settings.bookmarkManager.openBookMark)return;const i=new ke.Setting(r).setName(je("Save Bookmark When Open URI")).addToggle(h=>{h.setValue(this.plugin.settings.bookmarkManager.saveBookMark).onChange(async m=>{this.plugin.settings.bookmarkManager.saveBookMark=m,this.applySettingsUpdate()})});this.addSettingToMasterSettingsList(n,i.settingEl,je("Save Bookmark When Open URI"));const a=new ke.Setting(r).setName(je("Send to ReadWise")).setDesc(je("Add a action in page header to Send to ReadWise.")).addToggle(h=>{h.setValue(this.plugin.settings.bookmarkManager.sendToReadWise).onChange(async m=>{this.plugin.settings.bookmarkManager.sendToReadWise=m,this.applySettingsUpdate()})});this.addSettingToMasterSettingsList(n,a.settingEl,je("Send to ReadWise"));const s=new ke.Setting(r).setName(je("Pagination")).addText(h=>h.setPlaceholder(zo.bookmarkManager.pagination).setValue(this.plugin.settings.bookmarkManager.pagination).onChange(async 
m=>{m===""&&(this.plugin.settings.bookmarkManager.pagination=zo.bookmarkManager.pagination,this.applySettingsUpdate(),this.display()),this.plugin.settings.bookmarkManager.pagination=m,this.applySettingsUpdate()}));this.addSettingToMasterSettingsList(n,s.settingEl,je("Pagination"));const c=new ke.Setting(r).setName(je("Category")).addTextArea(h=>{h.setPlaceholder(zo.bookmarkManager.category).setValue(this.plugin.settings.bookmarkManager.category).onChange(m=>{this.plugin.settings.bookmarkManager.category=m===""?zo.bookmarkManager.category:m,this.applySettingsUpdate()})});this.addSettingToMasterSettingsList(n,c.settingEl,je("Category"));const u=new ke.Setting(r).setName(je("Default Category (Use , to split)")).addText(h=>{h.setPlaceholder(zo.bookmarkManager.defaultCategory).setValue(this.plugin.settings.bookmarkManager.defaultCategory).onChange(m=>{this.plugin.settings.bookmarkManager.defaultCategory=m,this.applySettingsUpdate()})});this.addSettingToMasterSettingsList(n,u.settingEl,je("Default Category (Use , to split)"));const p=new ke.Setting(r).setName(je("Default Column List")).addText(h=>{h.setPlaceholder(zo.bookmarkManager.defaultColumnList.join(" ")).setValue(this.plugin.settings.bookmarkManager.defaultColumnList.join(" ")).onChange(async m=>{m===""&&(this.plugin.settings.bookmarkManager.defaultColumnList=zo.bookmarkManager.defaultColumnList,this.applySettingsUpdate(),this.display()),this.plugin.settings.bookmarkManager.defaultColumnList=m.split(" "),this.applySettingsUpdate()})});this.addSettingToMasterSettingsList(n,p.settingEl,je("Default Column List"));const v=new ke.Setting(r).setName(je("Default Category Filter Type")).addDropdown(async h=>{h.addOption("tree",je("Tree")).addOption("menu",je("Menu")).setValue(this.plugin.settings.bookmarkManager.defaultFilterType).onChange(async b=>{this.plugin.settings.bookmarkManager.defaultFilterType=b,this.applySettingsUpdate()})});this.addSettingToMasterSettingsList(n,v.settingEl,je("Default Category Filter 
Type"))}}const cL=(e,t)=>(e%t+t)%t;class uL{constructor(t,n,r,o){Ce(this,"owner");Ce(this,"values");Ce(this,"suggestions");Ce(this,"selectedItem");Ce(this,"containerEl");Ce(this,"app");this.owner=t,this.containerEl=n,this.app=o,n.on("click",".suggestion-item",this.onSuggestionClick.bind(this)),n.on("mousemove",".suggestion-item",this.onSuggestionMouseover.bind(this)),r.register([],"ArrowUp",s=>{if(!s.isComposing)return this.setSelectedItem(this.selectedItem-1,!0),!1}),r.register([],"ArrowDown",s=>{if(!s.isComposing)return this.setSelectedItem(this.selectedItem+1,!0),!1}),r.register([],"Enter",s=>{if(!s.isComposing)return this.useSelectedItem(s),!1});const i=this.app.plugins.getPlugin("surfing").settings,a=[...Ri,...i.customSearchEngine];for(let s=0;s{if(!c.isComposing)return this.setSelectedItem(s,!1),this.useSelectedItem(c),!1});break}r.register(["Mod"],`${s+1}`,c=>{if(!c.isComposing)return this.setSelectedItem(s,!1),this.useSelectedItem(c),!1})}}onSuggestionClick(t,n){t.preventDefault();const r=this.suggestions.indexOf(n);this.setSelectedItem(r,!1),this.useSelectedItem(t)}onSuggestionMouseover(t,n){const r=this.suggestions.indexOf(n);this.setSelectedItem(r,!1)}setSuggestions(t){this.containerEl.empty();const n=[];t.forEach((r,o)=>{const i=this.containerEl.createDiv("suggestion-item");this.owner.renderSuggestion(r,i),o<10&&i.createEl("div",{text:`${ke.Platform.isMacOS?"CMD + ":"Ctrl + "}${o!=9?o+1:0}`,cls:"wb-search-suggestion-index"}),n.push(i)}),this.values=t,this.suggestions=n,this.setSelectedItem(0,!1)}useSelectedItem(t){const n=this.values[this.selectedItem];n&&this.owner.selectSuggestion(n,t)}setSelectedItem(t,n){const r=cL(t,this.suggestions.length),o=this.suggestions[this.selectedItem],i=this.suggestions[r];o==null||o.removeClass("is-selected"),i==null||i.addClass("is-selected"),this.selectedItem=r,n&&i.scrollIntoView(!1)}}class 
Ry{constructor(t,n){Ce(this,"app");Ce(this,"inputEl");Ce(this,"popper");Ce(this,"scope");Ce(this,"suggestEl");Ce(this,"suggest");this.app=t,this.inputEl=n,this.scope=new ke.Scope,this.suggestEl=createDiv("wb-search-suggestion-container");const r=this.suggestEl.createDiv("wb-search-suggestion");this.suggest=new uL(this,r,this.scope,this.app),this.scope.register([],"Escape",this.close.bind(this)),this.inputEl.addEventListener("input",this.onInputChanged.bind(this)),this.inputEl.addEventListener("focus",this.onInputChanged.bind(this)),this.inputEl.addEventListener("blur",this.close.bind(this)),this.suggestEl.on("mousedown",".wb-search-suggestion-container",o=>{o.preventDefault()})}onInputChanged(){const t=this.inputEl.value,n=this.getSuggestions(t);if(!n||/^\s{0,}$/.test(t)){this.close();return}n.length>0?(this.suggest.setSuggestions(n),this.open(this.app.dom.appContainerEl,this.inputEl)):this.close()}open(t,n){this.app.keymap.pushScope(this.scope),t.appendChild(this.suggestEl),this.popper=oL(n,this.suggestEl,{placement:"bottom-start",modifiers:[{name:"sameWidth",enabled:!0,fn:({state:r,instance:o})=>{const i=`${r.rects.reference.width}px`;r.styles.popper.width!==i&&(r.styles.popper.width=i,o.update())},phase:"beforeWrite",requires:["computeStyles"]},{name:"offset",options:{offset:[0,5]}}]})}close(){this.app.keymap.popScope(this.scope),this.suggest.setSuggestions([]),this.popper&&this.popper.destroy(),this.suggestEl.detach()}}class dL{constructor(t,n,r,o){Ce(this,"plugin");Ce(this,"leaf");Ce(this,"webContents");Ce(this,"closeButtonEl");Ce(this,"backwardButtonEl");Ce(this,"forwardButtonEl");Ce(this,"inputEl");Ce(this,"searchBoxEl");Ce(this,"clicked");this.leaf=t,this.webContents=n,this.plugin=r,this.onload()}onload(){const t=this.leaf.view.contentEl;this.searchBoxEl=t.createEl("div",{cls:"wb-search-box"}),this.inputEl=this.searchBoxEl.createEl("input",{type:"text",placeholder:"",cls:"wb-search-input"});const 
n=this.searchBoxEl.createEl("div",{cls:"wb-search-button-group"});this.backwardButtonEl=n.createEl("div",{cls:"wb-search-button search-forward"}),this.forwardButtonEl=n.createEl("div",{cls:"wb-search-button search-backward"}),this.closeButtonEl=n.createEl("div",{cls:"wb-search-button search-close"}),this.closeButtonEl.addEventListener("click",this.unload.bind(this)),this.backwardButtonEl.addEventListener("click",this.backward.bind(this)),this.forwardButtonEl.addEventListener("click",this.forward.bind(this)),this.inputEl.addEventListener("keyup",this.search.bind(this)),this.inputEl.addEventListener("keyup",this.exist.bind(this)),ke.setIcon(this.closeButtonEl,"x"),ke.setIcon(this.backwardButtonEl,"arrow-up"),ke.setIcon(this.forwardButtonEl,"arrow-down"),this.inputEl.focus()}search(t){t.preventDefault(),this.inputEl.value!==""&&(t.key==="Enter"&&!t.shiftKey&&this.forward(),t.key==="Enter"&&t.shiftKey&&this.backward())}exist(t){t.preventDefault(),t.key==="Escape"&&this.unload()}backward(){this.inputEl.value!==""&&(this.clicked?this.webContents.findInPage(this.inputEl.value,{forward:!1,findNext:!1}):this.webContents.findInPage(this.inputEl.value,{forward:!1,findNext:!0}),this.clicked=!0)}forward(){this.inputEl.value!==""&&(this.clicked?this.webContents.findInPage(this.inputEl.value,{forward:!0,findNext:!1}):this.webContents.findInPage(this.inputEl.value,{forward:!0,findNext:!0}),this.clicked=!0)}unload(){this.webContents.stopFindInPage("clearSelection"),this.inputEl.value="",this.closeButtonEl.removeEventListener("click",this.unload),this.backwardButtonEl.removeEventListener("click",this.backward),this.forwardButtonEl.removeEventListener("click",this.forward),this.inputEl.removeEventListener("keyup",this.search),this.inputEl.removeEventListener("keyup",this.exist),this.searchBoxEl.detach()}}class fL{constructor(t){Ce(this,"content");Ce(this,"sContent");this.content=t,this.sContent=t.split(` -`)}searchLines(t,n){return t.substring(t.substring(0,n).lastIndexOf(` 
-`)+1,n+t.substring(n).indexOf(` -`))}search(t,n){let r="";if(!n)r=this.searchLines(this.content,t);else{r=this.searchLines(this.content,t);const o=this.sContent.findIndex(i=>i.startsWith(r));o===0?r=this.sContent.slice(0,2).filter(i=>i&&i.trim()).join("
"):r=this.sContent.slice(o-1,o+1).filter(i=>i&&i.trim()).join("
")}return r}}class pL{constructor(t,n,r,o,i){Ce(this,"parent");Ce(this,"path");Ce(this,"foundWords");Ce(this,"matches");Ce(this,"plugin");this.parent=t,this.path=n,this.foundWords=r,this.matches=o,this.plugin=i}async onload(){const t=this.parent.createEl("div",{cls:"wb-omni-item"});t.createEl("div",{cls:"wb-omni-item-path",text:this.path});const n=t.createEl("div",{cls:"wb-omni-item-content-list"}),r=this.plugin.app.vault.getAbstractFileByPath(this.path);let o="";if(r instanceof ke.TFile&&(o=await this.plugin.app.vault.cachedRead(r)),!r)return;const i=new fL(o);this.matches.length>0&&this.matches.forEach(a=>{const s=n.createEl("div",{cls:"wb-content-list-text"});s.innerHTML=i.search(a.offset,!0)})}}class vL{constructor(t,n){Ce(this,"leaf");Ce(this,"plugin");Ce(this,"wbOmniSearchCtnEl");Ce(this,"query");Ce(this,"result");this.plugin=n,this.leaf=t}onload(){this.wbOmniSearchCtnEl=this.leaf.view.contentEl.createEl("div",{cls:"wb-omni-box"}),this.hide()}hide(){this.wbOmniSearchCtnEl.isShown()&&this.wbOmniSearchCtnEl.hide()}show(){this.wbOmniSearchCtnEl.isShown()||this.wbOmniSearchCtnEl.show()}notFound(){this.wbOmniSearchCtnEl.empty(),this.wbOmniSearchCtnEl.createEl("div",{text:"No results found",cls:"wb-omni-item-notfound"})}tick(t){var n;if(this.result!==t&&(this.result=t),this.result!==t&&(this.result=t),!this.result||((n=this.result)==null?void 0:n.length)===0){this.show(),this.notFound();return}if(this.result.length>0){if(!this.result[0].foundWords.find(r=>r===this.query)){this.notFound();return}this.show(),this.result.forEach(r=>{new pL(this.wbOmniSearchCtnEl,r.path,r.foundWords,r.matches,this.plugin).onload()})}}async update(t){if(this.query===t)return;this.wbOmniSearchCtnEl.empty(),this.query=t;const n=await(omnisearch==null?void 0:omnisearch.search(this.query));this.tick(n),(!n||(n==null?void 0:n.length)===0)&&setTimeout(async()=>{const r=await 
omnisearch.search(this.query);this.tick(r)},3e3)}onunload(){this.wbOmniSearchCtnEl.empty(),this.wbOmniSearchCtnEl.detach()}}class hL{constructor(t,n,r,o,i){Ce(this,"parentEl");Ce(this,"plugin");Ce(this,"item");Ce(this,"view");Ce(this,"bookmark");this.parentEl=t,this.plugin=n,this.item=o,this.view=r,this.bookmark=i}onload(){typeof this.item=="object"&&(this.item.value||this.item.children)&&this.item.value!=="ROOT"?this.renderFolder():this.renderBookmark()}renderFolder(){const t=this.parentEl.createEl("div",{cls:"wb-bookmark-folder"}),n=t.createEl("div",{cls:"wb-bookmark-folder-icon"});t.createEl("div",{cls:"wb-bookmark-folder-title",text:this.item.label}),ke.setIcon(n,"folder-closed");let r;t.onclick=o=>{const i=new ke.Menu;if(!r){const a=o.target,s=a.parentElement;s.classList.contains("wb-bookmark-folder")?r=s.getBoundingClientRect():r=a.getBoundingClientRect()}this.loopMenu(i,this.item),i.showAtPosition({x:r.left,y:r.bottom})}}loopMenu(t,n){n!=null&&n.children&&(n==null||n.children.forEach(o=>{let i;if(t.addItem(a=>i=a.setTitle(o.label).setIcon("folder-closed").setSubmenu()),!(o!=null&&o.children)){const a=this.bookmark.filter(s=>s.category.length?s.category[s.category.length-1]===o.value:!1);a.length>0&&a.forEach(s=>{i==null||i.addItem(c=>{c.setIcon("surfing").setTitle(s.name).onClick(u=>{if(u.shiftKey){window.open(s.url,"_blank","external");return}!u.ctrlKey&&!u.metaKey?on.spawnWebBrowserView(this.plugin,!1,{url:s.url}):on.spawnWebBrowserView(this.plugin,!0,{url:s.url})})})})}o!=null&&o.children&&i&&this.loopMenu(i,o)}));const r=this.bookmark.filter(o=>{var i;return o.category.length?(i=o.category[o.category.length-1])==null?void 
0:i.contains(n.value):!1});r.length>0&&r.forEach(o=>{t.addItem(i=>{i.setIcon("surfing").setTitle(o.name).onClick(a=>{if(a.shiftKey){window.open(o.url,"_blank","external");return}!a.ctrlKey&&!a.metaKey?on.spawnWebBrowserView(this.plugin,!1,{url:o.url}):on.spawnWebBrowserView(this.plugin,!0,{url:o.url})})})})}renderBookmark(){if(this.bookmark.length===0)return;const t=this.bookmark.filter(n=>{var r;return(n==null?void 0:n.category[0])===((r=this.item)==null?void 0:r.value)&&(n==null?void 0:n.category.length)===1});(t==null?void 0:t.length)>0&&t.forEach(n=>{const r=this.parentEl.createEl("div",{cls:"wb-bookmark-item"}),o=r.createEl("div",{cls:"wb-bookmark-item-icon"});ke.setIcon(o,"album"),r.createEl("div",{cls:"wb-bookmark-item-title",text:n.name}),r.onclick=i=>{if(i.shiftKey){window.open(n.url,"","external");return}!i.ctrlKey&&!i.metaKey?on.spawnWebBrowserView(this.plugin,!1,{url:n.url}):on.spawnWebBrowserView(this.plugin,!0,{url:n.url})}})}}const Dy=e=>`${e.app.vault.configDir}/surfing-bookmark.json`,Ua=async e=>JSON.parse(await e.app.vault.adapter.read(Dy(e))),Il=async(e,t)=>{await e.app.vault.adapter.write(Dy(e),JSON.stringify(t,null,2))},k2=async e=>{await e.app.vault.adapter.write(Dy(e),JSON.stringify({bookmarks:[{id:"2014068036",name:"Obsidian",url:"https://obsidian.md/",description:"A awesome note-taking tool",category:["ROOT"],tags:"",created:1672840861051,modified:1672840861052}],categories:[{value:"ROOT",text:"ROOT",label:"ROOT",children:[]}]},null,2))};function js(e){return e&&e.__esModule&&Object.prototype.hasOwnProperty.call(e,"default")?e.default:e}var O2={exports:{}},Yc={},$2={exports:{}},cn={},lC;function gL(){if(lC)return cn;lC=1;var 
e=Symbol.for("react.element"),t=Symbol.for("react.portal"),n=Symbol.for("react.fragment"),r=Symbol.for("react.strict_mode"),o=Symbol.for("react.profiler"),i=Symbol.for("react.provider"),a=Symbol.for("react.context"),s=Symbol.for("react.forward_ref"),c=Symbol.for("react.suspense"),u=Symbol.for("react.memo"),p=Symbol.for("react.lazy"),v=Symbol.iterator;function h(D){return D===null||typeof D!="object"?null:(D=v&&D[v]||D["@@iterator"],typeof D=="function"?D:null)}var m={isMounted:function(){return!1},enqueueForceUpdate:function(){},enqueueReplaceState:function(){},enqueueSetState:function(){}},b=Object.assign,y={};function w(D,W,G){this.props=D,this.context=W,this.refs=y,this.updater=G||m}w.prototype.isReactComponent={},w.prototype.setState=function(D,W){if(typeof D!="object"&&typeof D!="function"&&D!=null)throw Error("setState(...): takes an object of state variables to update or a function which returns an object of state variables.");this.updater.enqueueSetState(this,D,W,"setState")},w.prototype.forceUpdate=function(D){this.updater.enqueueForceUpdate(this,D,"forceUpdate")};function C(){}C.prototype=w.prototype;function S(D,W,G){this.props=D,this.context=W,this.refs=y,this.updater=G||m}var E=S.prototype=new C;E.constructor=S,b(E,w.prototype),E.isPureReactComponent=!0;var k=Array.isArray,O=Object.prototype.hasOwnProperty,$={current:null},T={key:!0,ref:!0,__self:!0,__source:!0};function M(D,W,G){var q,J={},Y=null,Q=null;if(W!=null)for(q in W.ref!==void 0&&(Q=W.ref),W.key!==void 0&&(Y=""+W.key),W)O.call(W,q)&&!T.hasOwnProperty(q)&&(J[q]=W[q]);var te=arguments.length-2;if(te===1)J.children=G;else if(1>>1,W=L[D];if(0>>1;Do(J,U))Yo(Q,J)?(L[D]=Q,L[Y]=U,D=Y):(L[D]=J,L[q]=U,D=q);else if(Yo(Q,U))L[D]=Q,L[Y]=U,D=Y;else break e}}return F}function o(L,F){var U=L.sortIndex-F.sortIndex;return U!==0?U:L.id-F.id}if(typeof performance=="object"&&typeof performance.now=="function"){var i=performance;e.unstable_now=function(){return i.now()}}else{var 
a=Date,s=a.now();e.unstable_now=function(){return a.now()-s}}var c=[],u=[],p=1,v=null,h=3,m=!1,b=!1,y=!1,w=typeof setTimeout=="function"?setTimeout:null,C=typeof clearTimeout=="function"?clearTimeout:null,S=typeof setImmediate<"u"?setImmediate:null;typeof navigator<"u"&&navigator.scheduling!==void 0&&navigator.scheduling.isInputPending!==void 0&&navigator.scheduling.isInputPending.bind(navigator.scheduling);function E(L){for(var F=n(u);F!==null;){if(F.callback===null)r(u);else if(F.startTime<=L)r(u),F.sortIndex=F.expirationTime,t(c,F);else break;F=n(u)}}function k(L){if(y=!1,E(L),!b)if(n(c)!==null)b=!0,H(O);else{var F=n(u);F!==null&&j(k,F.startTime-L)}}function O(L,F){b=!1,y&&(y=!1,C(M),M=-1),m=!0;var U=h;try{for(E(F),v=n(c);v!==null&&(!(v.expirationTime>F)||L&&!A());){var D=v.callback;if(typeof D=="function"){v.callback=null,h=v.priorityLevel;var W=D(v.expirationTime<=F);F=e.unstable_now(),typeof W=="function"?v.callback=W:v===n(c)&&r(c),E(F)}else r(c);v=n(c)}if(v!==null)var G=!0;else{var q=n(u);q!==null&&j(k,q.startTime-F),G=!1}return G}finally{v=null,h=U,m=!1}}var $=!1,T=null,M=-1,P=5,R=-1;function A(){return!(e.unstable_now()-RL||125D?(L.sortIndex=U,t(u,L),n(c)===null&&L===n(u)&&(y?(C(M),M=-1):y=!0,j(k,U-D))):(L.sortIndex=W,t(c,L),b||m||(b=!0,H(O))),L},e.unstable_shouldYield=A,e.unstable_wrapCallback=function(L){var F=h;return function(){var U=h;h=F;try{return L.apply(this,arguments)}finally{h=U}}}}(Jg)),Jg}var dC;function yL(){return dC||(dC=1,Zg.exports=bL()),Zg.exports}var fC;function wL(){if(fC)return oo;fC=1;var e=d,t=yL();function n(l){for(var f="https://reactjs.org/docs/error-decoder.html?invariant="+l,g=1;g"u"||typeof window.document>"u"||typeof 
window.document.createElement>"u"),c=Object.prototype.hasOwnProperty,u=/^[:A-Z_a-z\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u02FF\u0370-\u037D\u037F-\u1FFF\u200C-\u200D\u2070-\u218F\u2C00-\u2FEF\u3001-\uD7FF\uF900-\uFDCF\uFDF0-\uFFFD][:A-Z_a-z\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u02FF\u0370-\u037D\u037F-\u1FFF\u200C-\u200D\u2070-\u218F\u2C00-\u2FEF\u3001-\uD7FF\uF900-\uFDCF\uFDF0-\uFFFD\-.0-9\u00B7\u0300-\u036F\u203F-\u2040]*$/,p={},v={};function h(l){return c.call(v,l)?!0:c.call(p,l)?!1:u.test(l)?v[l]=!0:(p[l]=!0,!1)}function m(l,f,g,x){if(g!==null&&g.type===0)return!1;switch(typeof f){case"function":case"symbol":return!0;case"boolean":return x?!1:g!==null?!g.acceptsBooleans:(l=l.toLowerCase().slice(0,5),l!=="data-"&&l!=="aria-");default:return!1}}function b(l,f,g,x){if(f===null||typeof f>"u"||m(l,f,g,x))return!0;if(x)return!1;if(g!==null)switch(g.type){case 3:return!f;case 4:return f===!1;case 5:return isNaN(f);case 6:return isNaN(f)||1>f}return!1}function y(l,f,g,x,I,N,X){this.acceptsBooleans=f===2||f===3||f===4,this.attributeName=x,this.attributeNamespace=I,this.mustUseProperty=g,this.propertyName=l,this.type=f,this.sanitizeURL=N,this.removeEmptyString=X}var w={};"children dangerouslySetInnerHTML defaultValue defaultChecked innerHTML suppressContentEditableWarning suppressHydrationWarning style".split(" ").forEach(function(l){w[l]=new y(l,0,!1,l,null,!1,!1)}),[["acceptCharset","accept-charset"],["className","class"],["htmlFor","for"],["httpEquiv","http-equiv"]].forEach(function(l){var f=l[0];w[f]=new y(f,1,!1,l[1],null,!1,!1)}),["contentEditable","draggable","spellCheck","value"].forEach(function(l){w[l]=new y(l,2,!1,l.toLowerCase(),null,!1,!1)}),["autoReverse","externalResourcesRequired","focusable","preserveAlpha"].forEach(function(l){w[l]=new y(l,2,!1,l,null,!1,!1)}),"allowFullScreen async autoFocus autoPlay controls default defer disabled disablePictureInPicture disableRemotePlayback formNoValidate hidden loop noModule noValidate open playsInline readOnly required 
reversed scoped seamless itemScope".split(" ").forEach(function(l){w[l]=new y(l,3,!1,l.toLowerCase(),null,!1,!1)}),["checked","multiple","muted","selected"].forEach(function(l){w[l]=new y(l,3,!0,l,null,!1,!1)}),["capture","download"].forEach(function(l){w[l]=new y(l,4,!1,l,null,!1,!1)}),["cols","rows","size","span"].forEach(function(l){w[l]=new y(l,6,!1,l,null,!1,!1)}),["rowSpan","start"].forEach(function(l){w[l]=new y(l,5,!1,l.toLowerCase(),null,!1,!1)});var C=/[\-:]([a-z])/g;function S(l){return l[1].toUpperCase()}"accent-height alignment-baseline arabic-form baseline-shift cap-height clip-path clip-rule color-interpolation color-interpolation-filters color-profile color-rendering dominant-baseline enable-background fill-opacity fill-rule flood-color flood-opacity font-family font-size font-size-adjust font-stretch font-style font-variant font-weight glyph-name glyph-orientation-horizontal glyph-orientation-vertical horiz-adv-x horiz-origin-x image-rendering letter-spacing lighting-color marker-end marker-mid marker-start overline-position overline-thickness paint-order panose-1 pointer-events rendering-intent shape-rendering stop-color stop-opacity strikethrough-position strikethrough-thickness stroke-dasharray stroke-dashoffset stroke-linecap stroke-linejoin stroke-miterlimit stroke-opacity stroke-width text-anchor text-decoration text-rendering underline-position underline-thickness unicode-bidi unicode-range units-per-em v-alphabetic v-hanging v-ideographic v-mathematical vector-effect vert-adv-y vert-origin-x vert-origin-y word-spacing writing-mode xmlns:xlink x-height".split(" ").forEach(function(l){var f=l.replace(C,S);w[f]=new y(f,1,!1,l,null,!1,!1)}),"xlink:actuate xlink:arcrole xlink:role xlink:show xlink:title xlink:type".split(" ").forEach(function(l){var f=l.replace(C,S);w[f]=new y(f,1,!1,l,"http://www.w3.org/1999/xlink",!1,!1)}),["xml:base","xml:lang","xml:space"].forEach(function(l){var f=l.replace(C,S);w[f]=new 
y(f,1,!1,l,"http://www.w3.org/XML/1998/namespace",!1,!1)}),["tabIndex","crossOrigin"].forEach(function(l){w[l]=new y(l,1,!1,l.toLowerCase(),null,!1,!1)}),w.xlinkHref=new y("xlinkHref",1,!1,"xlink:href","http://www.w3.org/1999/xlink",!0,!1),["src","href","action","formAction"].forEach(function(l){w[l]=new y(l,1,!1,l.toLowerCase(),null,!0,!0)});function E(l,f,g,x){var I=w.hasOwnProperty(f)?w[f]:null;(I!==null?I.type!==0:x||!(2oe||I[X]!==N[oe]){var fe=` -`+I[X].replace(" at new "," at ");return l.displayName&&fe.includes("")&&(fe=fe.replace("",l.displayName)),fe}while(1<=X&&0<=oe);break}}}finally{G=!1,Error.prepareStackTrace=g}return(l=l?l.displayName||l.name:"")?W(l):""}function J(l){switch(l.tag){case 5:return W(l.type);case 16:return W("Lazy");case 13:return W("Suspense");case 19:return W("SuspenseList");case 0:case 2:case 15:return l=q(l.type,!1),l;case 11:return l=q(l.type.render,!1),l;case 1:return l=q(l.type,!0),l;default:return""}}function Y(l){if(l==null)return null;if(typeof l=="function")return l.displayName||l.name||null;if(typeof l=="string")return l;switch(l){case T:return"Fragment";case $:return"Portal";case P:return"Profiler";case M:return"StrictMode";case z:return"Suspense";case B:return"SuspenseList"}if(typeof l=="object")switch(l.$$typeof){case A:return(l.displayName||"Context")+".Consumer";case R:return(l._context.displayName||"Context")+".Provider";case V:var f=l.render;return l=l.displayName,l||(l=f.displayName||f.name||"",l=l!==""?"ForwardRef("+l+")":"ForwardRef"),l;case _:return f=l.displayName||null,f!==null?f:Y(l.type)||"Memo";case H:f=l._payload,l=l._init;try{return Y(l(f))}catch{}}return null}function Q(l){var f=l.type;switch(l.tag){case 24:return"Cache";case 9:return(f.displayName||"Context")+".Consumer";case 10:return(f._context.displayName||"Context")+".Provider";case 18:return"DehydratedFragment";case 11:return l=f.render,l=l.displayName||l.name||"",f.displayName||(l!==""?"ForwardRef("+l+")":"ForwardRef");case 7:return"Fragment";case 
5:return f;case 4:return"Portal";case 3:return"Root";case 6:return"Text";case 16:return Y(f);case 8:return f===M?"StrictMode":"Mode";case 22:return"Offscreen";case 12:return"Profiler";case 21:return"Scope";case 13:return"Suspense";case 19:return"SuspenseList";case 25:return"TracingMarker";case 1:case 0:case 17:case 2:case 14:case 15:if(typeof f=="function")return f.displayName||f.name||null;if(typeof f=="string")return f}return null}function te(l){switch(typeof l){case"boolean":case"number":case"string":case"undefined":return l;case"object":return l;default:return""}}function ce(l){var f=l.type;return(l=l.nodeName)&&l.toLowerCase()==="input"&&(f==="checkbox"||f==="radio")}function se(l){var f=ce(l)?"checked":"value",g=Object.getOwnPropertyDescriptor(l.constructor.prototype,f),x=""+l[f];if(!l.hasOwnProperty(f)&&typeof g<"u"&&typeof g.get=="function"&&typeof g.set=="function"){var I=g.get,N=g.set;return Object.defineProperty(l,f,{configurable:!0,get:function(){return I.call(this)},set:function(X){x=""+X,N.call(this,X)}}),Object.defineProperty(l,f,{enumerable:g.enumerable}),{getValue:function(){return x},setValue:function(X){x=""+X},stopTracking:function(){l._valueTracker=null,delete l[f]}}}}function ne(l){l._valueTracker||(l._valueTracker=se(l))}function ae(l){if(!l)return!1;var f=l._valueTracker;if(!f)return!0;var g=f.getValue(),x="";return l&&(x=ce(l)?l.checked?"true":"false":l.value),l=x,l!==g?(f.setValue(l),!0):!1}function ee(l){if(l=l||(typeof document<"u"?document:void 0),typeof l>"u")return null;try{return l.activeElement||l.body}catch{return l.body}}function re(l,f){var g=f.checked;return U({},f,{defaultChecked:void 0,defaultValue:void 0,value:void 0,checked:g??l._wrapperState.initialChecked})}function le(l,f){var 
g=f.defaultValue==null?"":f.defaultValue,x=f.checked!=null?f.checked:f.defaultChecked;g=te(f.value!=null?f.value:g),l._wrapperState={initialChecked:x,initialValue:g,controlled:f.type==="checkbox"||f.type==="radio"?f.checked!=null:f.value!=null}}function pe(l,f){f=f.checked,f!=null&&E(l,"checked",f,!1)}function Oe(l,f){pe(l,f);var g=te(f.value),x=f.type;if(g!=null)x==="number"?(g===0&&l.value===""||l.value!=g)&&(l.value=""+g):l.value!==""+g&&(l.value=""+g);else if(x==="submit"||x==="reset"){l.removeAttribute("value");return}f.hasOwnProperty("value")?Re(l,f.type,g):f.hasOwnProperty("defaultValue")&&Re(l,f.type,te(f.defaultValue)),f.checked==null&&f.defaultChecked!=null&&(l.defaultChecked=!!f.defaultChecked)}function ge(l,f,g){if(f.hasOwnProperty("value")||f.hasOwnProperty("defaultValue")){var x=f.type;if(!(x!=="submit"&&x!=="reset"||f.value!==void 0&&f.value!==null))return;f=""+l._wrapperState.initialValue,g||f===l.value||(l.value=f),l.defaultValue=f}g=l.name,g!==""&&(l.name=""),l.defaultChecked=!!l._wrapperState.initialChecked,g!==""&&(l.name=g)}function Re(l,f,g){(f!=="number"||ee(l.ownerDocument)!==l)&&(g==null?l.defaultValue=""+l._wrapperState.initialValue:l.defaultValue!==""+g&&(l.defaultValue=""+g))}var ye=Array.isArray;function Te(l,f,g,x){if(l=l.options,f){f={};for(var I=0;I"+f.valueOf().toString()+"",f=rt.firstChild;l.firstChild;)l.removeChild(l.firstChild);for(;f.firstChild;)l.appendChild(f.firstChild)}});function Ve(l,f){if(f){var g=l.firstChild;if(g&&g===l.lastChild&&g.nodeType===3){g.nodeValue=f;return}}l.textContent=f}var 
Ye={animationIterationCount:!0,aspectRatio:!0,borderImageOutset:!0,borderImageSlice:!0,borderImageWidth:!0,boxFlex:!0,boxFlexGroup:!0,boxOrdinalGroup:!0,columnCount:!0,columns:!0,flex:!0,flexGrow:!0,flexPositive:!0,flexShrink:!0,flexNegative:!0,flexOrder:!0,gridArea:!0,gridRow:!0,gridRowEnd:!0,gridRowSpan:!0,gridRowStart:!0,gridColumn:!0,gridColumnEnd:!0,gridColumnSpan:!0,gridColumnStart:!0,fontWeight:!0,lineClamp:!0,lineHeight:!0,opacity:!0,order:!0,orphans:!0,tabSize:!0,widows:!0,zIndex:!0,zoom:!0,fillOpacity:!0,floodOpacity:!0,stopOpacity:!0,strokeDasharray:!0,strokeDashoffset:!0,strokeMiterlimit:!0,strokeOpacity:!0,strokeWidth:!0},Ge=["Webkit","ms","Moz","O"];Object.keys(Ye).forEach(function(l){Ge.forEach(function(f){f=f+l.charAt(0).toUpperCase()+l.substring(1),Ye[f]=Ye[l]})});function Fe(l,f,g){return f==null||typeof f=="boolean"||f===""?"":g||typeof f!="number"||f===0||Ye.hasOwnProperty(l)&&Ye[l]?(""+f).trim():f+"px"}function we(l,f){l=l.style;for(var g in f)if(f.hasOwnProperty(g)){var x=g.indexOf("--")===0,I=Fe(g,f[g],x);g==="float"&&(g="cssFloat"),x?l.setProperty(g,I):l[g]=I}}var ze=U({menuitem:!0},{area:!0,base:!0,br:!0,col:!0,embed:!0,hr:!0,img:!0,input:!0,keygen:!0,link:!0,meta:!0,param:!0,source:!0,track:!0,wbr:!0});function Me(l,f){if(f){if(ze[l]&&(f.children!=null||f.dangerouslySetInnerHTML!=null))throw Error(n(137,l));if(f.dangerouslySetInnerHTML!=null){if(f.children!=null)throw Error(n(60));if(typeof f.dangerouslySetInnerHTML!="object"||!("__html"in f.dangerouslySetInnerHTML))throw Error(n(61))}if(f.style!=null&&typeof f.style!="object")throw Error(n(62))}}function Pe(l,f){if(l.indexOf("-")===-1)return typeof f.is=="string";switch(l){case"annotation-xml":case"color-profile":case"font-face":case"font-face-src":case"font-face-uri":case"font-face-format":case"font-face-name":case"missing-glyph":return!1;default:return!0}}var Ke=null;function St(l){return 
l=l.target||l.srcElement||window,l.correspondingUseElement&&(l=l.correspondingUseElement),l.nodeType===3?l.parentNode:l}var Ft=null,Lt=null,Ct=null;function Xt(l){if(l=Dc(l)){if(typeof Ft!="function")throw Error(n(280));var f=l.stateNode;f&&(f=Jd(f),Ft(l.stateNode,l.type,f))}}function Pt(l){Lt?Ct?Ct.push(l):Ct=[l]:Lt=l}function Gt(){if(Lt){var l=Lt,f=Ct;if(Ct=Lt=null,Xt(l),f)for(l=0;l>>=0,l===0?32:31-(fr(l)/pr|0)|0}var vr=64,Tn=4194304;function Vt(l){switch(l&-l){case 1:return 1;case 2:return 2;case 4:return 4;case 8:return 8;case 16:return 16;case 32:return 32;case 64:case 128:case 256:case 512:case 1024:case 2048:case 4096:case 8192:case 16384:case 32768:case 65536:case 131072:case 262144:case 524288:case 1048576:case 2097152:return l&4194240;case 4194304:case 8388608:case 16777216:case 33554432:case 67108864:return l&130023424;case 134217728:return 134217728;case 268435456:return 268435456;case 536870912:return 536870912;case 1073741824:return 1073741824;default:return l}}function ct(l,f){var g=l.pendingLanes;if(g===0)return 0;var x=0,I=l.suspendedLanes,N=l.pingedLanes,X=g&268435455;if(X!==0){var oe=X&~I;oe!==0?x=Vt(oe):(N&=X,N!==0&&(x=Vt(N)))}else X=g&~I,X!==0?x=Vt(X):N!==0&&(x=Vt(N));if(x===0)return 0;if(f!==0&&f!==x&&!(f&I)&&(I=x&-x,N=f&-f,I>=N||I===16&&(N&4194240)!==0))return f;if(x&4&&(x|=g&16),f=l.entangledLanes,f!==0)for(l=l.entanglements,f&=x;0g;g++)f.push(l);return f}function Cn(l,f,g){l.pendingLanes|=f,f!==536870912&&(l.suspendedLanes=0,l.pingedLanes=0),l=l.eventTimes,f=31-Gn(f),l[f]=g}function hr(l,f){var g=l.pendingLanes&~f;l.pendingLanes=f,l.suspendedLanes=0,l.pingedLanes=0,l.expiredLanes&=f,l.mutableReadLanes&=f,l.entangledLanes&=f,f=l.entanglements;var x=l.eventTimes;for(l=l.expirationTimes;0=kc),U1=" ",K1=!1;function q1(l,f){switch(l){case"keyup":return S3.indexOf(f.keyCode)!==-1;case"keydown":return f.keyCode!==229;case"keypress":case"mousedown":case"focusout":return!0;default:return!1}}function X1(l){return l=l.detail,typeof 
l=="object"&&"data"in l?l.data:null}var _s=!1;function E3(l,f){switch(l){case"compositionend":return X1(f);case"keypress":return f.which!==32?null:(K1=!0,U1);case"textInput":return l=f.data,l===U1&&K1?null:l;default:return null}}function k3(l,f){if(_s)return l==="compositionend"||!Nh&&q1(l,f)?(l=z1(),_d=Oh=fa=null,_s=!1,l):null;switch(l){case"paste":return null;case"keypress":if(!(f.ctrlKey||f.altKey||f.metaKey)||f.ctrlKey&&f.altKey){if(f.char&&1=f)return{node:g,offset:f-l};l=x}e:{for(;g;){if(g.nextSibling){g=g.nextSibling;break e}g=g.parentNode}g=void 0}g=tx(g)}}function rx(l,f){return l&&f?l===f?!0:l&&l.nodeType===3?!1:f&&f.nodeType===3?rx(l,f.parentNode):"contains"in l?l.contains(f):l.compareDocumentPosition?!!(l.compareDocumentPosition(f)&16):!1:!1}function ox(){for(var l=window,f=ee();f instanceof l.HTMLIFrameElement;){try{var g=typeof f.contentWindow.location.href=="string"}catch{g=!1}if(g)l=f.contentWindow;else break;f=ee(l.document)}return f}function jh(l){var f=l&&l.nodeName&&l.nodeName.toLowerCase();return f&&(f==="input"&&(l.type==="text"||l.type==="search"||l.type==="tel"||l.type==="url"||l.type==="password")||f==="textarea"||l.contentEditable==="true")}function D3(l){var f=ox(),g=l.focusedElem,x=l.selectionRange;if(f!==g&&g&&g.ownerDocument&&rx(g.ownerDocument.documentElement,g)){if(x!==null&&jh(g)){if(f=x.start,l=x.end,l===void 0&&(l=f),"selectionStart"in g)g.selectionStart=f,g.selectionEnd=Math.min(l,g.value.length);else if(l=(f=g.ownerDocument||document)&&f.defaultView||window,l.getSelection){l=l.getSelection();var I=g.textContent.length,N=Math.min(x.start,I);x=x.end===void 0?N:Math.min(x.end,I),!l.extend&&N>x&&(I=x,x=N,N=I),I=nx(g,N);var 
X=nx(g,x);I&&X&&(l.rangeCount!==1||l.anchorNode!==I.node||l.anchorOffset!==I.offset||l.focusNode!==X.node||l.focusOffset!==X.offset)&&(f=f.createRange(),f.setStart(I.node,I.offset),l.removeAllRanges(),N>x?(l.addRange(f),l.extend(X.node,X.offset)):(f.setEnd(X.node,X.offset),l.addRange(f)))}}for(f=[],l=g;l=l.parentNode;)l.nodeType===1&&f.push({element:l,left:l.scrollLeft,top:l.scrollTop});for(typeof g.focus=="function"&&g.focus(),g=0;g=document.documentMode,Vs=null,Lh=null,Tc=null,Bh=!1;function ix(l,f,g){var x=g.window===g?g.document:g.nodeType===9?g:g.ownerDocument;Bh||Vs==null||Vs!==ee(x)||(x=Vs,"selectionStart"in x&&jh(x)?x={start:x.selectionStart,end:x.selectionEnd}:(x=(x.ownerDocument&&x.ownerDocument.defaultView||window).getSelection(),x={anchorNode:x.anchorNode,anchorOffset:x.anchorOffset,focusNode:x.focusNode,focusOffset:x.focusOffset}),Tc&&Ic(Tc,x)||(Tc=x,x=Yd(Lh,"onSelect"),0Xs||(l.current=Gh[Xs],Gh[Xs]=null,Xs--)}function Nn(l,f){Xs++,Gh[Xs]=l.current,l.current=f}var ga={},Br=ha(ga),Jr=ha(!1),ts=ga;function Gs(l,f){var g=l.type.contextTypes;if(!g)return ga;var x=l.stateNode;if(x&&x.__reactInternalMemoizedUnmaskedChildContext===f)return x.__reactInternalMemoizedMaskedChildContext;var I={},N;for(N in g)I[N]=f[N];return x&&(l=l.stateNode,l.__reactInternalMemoizedUnmaskedChildContext=f,l.__reactInternalMemoizedMaskedChildContext=I),I}function eo(l){return l=l.childContextTypes,l!=null}function ef(){Hn(Jr),Hn(Br)}function wx(l,f,g){if(Br.current!==ga)throw Error(n(168));Nn(Br,f),Nn(Jr,g)}function xx(l,f,g){var x=l.stateNode;if(f=f.childContextTypes,typeof x.getChildContext!="function")return g;x=x.getChildContext();for(var I in x)if(!(I in f))throw Error(n(108,Q(l)||"Unknown",I));return U({},g,x)}function tf(l){return l=(l=l.stateNode)&&l.__reactInternalMemoizedMergedChildContext||ga,ts=Br.current,Nn(Br,l),Nn(Jr,Jr.current),!0}function Sx(l,f,g){var x=l.stateNode;if(!x)throw 
Error(n(169));g?(l=xx(l,f,ts),x.__reactInternalMemoizedMergedChildContext=l,Hn(Jr),Hn(Br),Nn(Br,l)):Hn(Jr),Nn(Jr,g)}var _i=null,nf=!1,Yh=!1;function Cx(l){_i===null?_i=[l]:_i.push(l)}function K3(l){nf=!0,Cx(l)}function ma(){if(!Yh&&_i!==null){Yh=!0;var l=0,f=Wt;try{var g=_i;for(Wt=1;l>=X,I-=X,Vi=1<<32-Gn(f)+I|g<qt?(Er=jt,jt=null):Er=jt.sibling;var yn=Ue(be,jt,xe[qt],tt);if(yn===null){jt===null&&(jt=Er);break}l&&jt&&yn.alternate===null&&f(be,jt),he=N(yn,he,qt),Dt===null?Tt=yn:Dt.sibling=yn,Dt=yn,jt=Er}if(qt===xe.length)return g(be,jt),Wn&&rs(be,qt),Tt;if(jt===null){for(;qtqt?(Er=jt,jt=null):Er=jt.sibling;var Oa=Ue(be,jt,yn.value,tt);if(Oa===null){jt===null&&(jt=Er);break}l&&jt&&Oa.alternate===null&&f(be,jt),he=N(Oa,he,qt),Dt===null?Tt=Oa:Dt.sibling=Oa,Dt=Oa,jt=Er}if(yn.done)return g(be,jt),Wn&&rs(be,qt),Tt;if(jt===null){for(;!yn.done;qt++,yn=xe.next())yn=Qe(be,yn.value,tt),yn!==null&&(he=N(yn,he,qt),Dt===null?Tt=yn:Dt.sibling=yn,Dt=yn);return Wn&&rs(be,qt),Tt}for(jt=x(be,jt);!yn.done;qt++,yn=xe.next())yn=vt(jt,be,qt,yn.value,tt),yn!==null&&(l&&yn.alternate!==null&&jt.delete(yn.key===null?qt:yn.key),he=N(yn,he,qt),Dt===null?Tt=yn:Dt.sibling=yn,Dt=yn);return l&&jt.forEach(function(OD){return f(be,OD)}),Wn&&rs(be,qt),Tt}function nr(be,he,xe,tt){if(typeof xe=="object"&&xe!==null&&xe.type===T&&xe.key===null&&(xe=xe.props.children),typeof xe=="object"&&xe!==null){switch(xe.$$typeof){case O:e:{for(var Tt=xe.key,Dt=he;Dt!==null;){if(Dt.key===Tt){if(Tt=xe.type,Tt===T){if(Dt.tag===7){g(be,Dt.sibling),he=I(Dt,xe.props.children),he.return=be,be=he;break e}}else if(Dt.elementType===Tt||typeof Tt=="object"&&Tt!==null&&Tt.$$typeof===H&&Tx(Tt)===Dt.type){g(be,Dt.sibling),he=I(Dt,xe.props),he.ref=jc(be,Dt,xe),he.return=be,be=he;break e}g(be,Dt);break}else f(be,Dt);Dt=Dt.sibling}xe.type===T?(he=ds(xe.props.children,be.mode,tt,xe.key),he.return=be,be=he):(tt=Mf(xe.type,xe.key,xe.props,null,be.mode,tt),tt.ref=jc(be,he,xe),tt.return=be,be=tt)}return X(be);case 
$:e:{for(Dt=xe.key;he!==null;){if(he.key===Dt)if(he.tag===4&&he.stateNode.containerInfo===xe.containerInfo&&he.stateNode.implementation===xe.implementation){g(be,he.sibling),he=I(he,xe.children||[]),he.return=be,be=he;break e}else{g(be,he);break}else f(be,he);he=he.sibling}he=qg(xe,be.mode,tt),he.return=be,be=he}return X(be);case H:return Dt=xe._init,nr(be,he,Dt(xe._payload),tt)}if(ye(xe))return Ot(be,he,xe,tt);if(F(xe))return It(be,he,xe,tt);sf(be,xe)}return typeof xe=="string"&&xe!==""||typeof xe=="number"?(xe=""+xe,he!==null&&he.tag===6?(g(be,he.sibling),he=I(he,xe),he.return=be,be=he):(g(be,he),he=Kg(xe,be.mode,tt),he.return=be,be=he),X(be)):g(be,he)}return nr}var Js=Px(!0),Mx=Px(!1),lf=ha(null),cf=null,el=null,ng=null;function rg(){ng=el=cf=null}function og(l){var f=lf.current;Hn(lf),l._currentValue=f}function ig(l,f,g){for(;l!==null;){var x=l.alternate;if((l.childLanes&f)!==f?(l.childLanes|=f,x!==null&&(x.childLanes|=f)):x!==null&&(x.childLanes&f)!==f&&(x.childLanes|=f),l===g)break;l=l.return}}function tl(l,f){cf=l,ng=el=null,l=l.dependencies,l!==null&&l.firstContext!==null&&(l.lanes&f&&(to=!0),l.firstContext=null)}function Io(l){var f=l._currentValue;if(ng!==l)if(l={context:l,memoizedValue:f,next:null},el===null){if(cf===null)throw Error(n(308));el=l,cf.dependencies={lanes:0,firstContext:l}}else el=el.next=l;return f}var os=null;function ag(l){os===null?os=[l]:os.push(l)}function Nx(l,f,g,x){var I=f.interleaved;return I===null?(g.next=g,ag(f)):(g.next=I.next,I.next=g),f.interleaved=g,Ui(l,x)}function Ui(l,f){l.lanes|=f;var g=l.alternate;for(g!==null&&(g.lanes|=f),g=l,l=l.return;l!==null;)l.childLanes|=f,g=l.alternate,g!==null&&(g.childLanes|=f),g=l,l=l.return;return g.tag===3?g.stateNode:null}var ba=!1;function sg(l){l.updateQueue={baseState:l.memoizedState,firstBaseUpdate:null,lastBaseUpdate:null,shared:{pending:null,interleaved:null,lanes:0},effects:null}}function 
Rx(l,f){l=l.updateQueue,f.updateQueue===l&&(f.updateQueue={baseState:l.baseState,firstBaseUpdate:l.firstBaseUpdate,lastBaseUpdate:l.lastBaseUpdate,shared:l.shared,effects:l.effects})}function Ki(l,f){return{eventTime:l,lane:f,tag:0,payload:null,callback:null,next:null}}function ya(l,f,g){var x=l.updateQueue;if(x===null)return null;if(x=x.shared,mn&2){var I=x.pending;return I===null?f.next=f:(f.next=I.next,I.next=f),x.pending=f,Ui(l,g)}return I=x.interleaved,I===null?(f.next=f,ag(x)):(f.next=I.next,I.next=f),x.interleaved=f,Ui(l,g)}function uf(l,f,g){if(f=f.updateQueue,f!==null&&(f=f.shared,(g&4194240)!==0)){var x=f.lanes;x&=l.pendingLanes,g|=x,f.lanes=g,ir(l,g)}}function Dx(l,f){var g=l.updateQueue,x=l.alternate;if(x!==null&&(x=x.updateQueue,g===x)){var I=null,N=null;if(g=g.firstBaseUpdate,g!==null){do{var X={eventTime:g.eventTime,lane:g.lane,tag:g.tag,payload:g.payload,callback:g.callback,next:null};N===null?I=N=X:N=N.next=X,g=g.next}while(g!==null);N===null?I=N=f:N=N.next=f}else I=N=f;g={baseState:x.baseState,firstBaseUpdate:I,lastBaseUpdate:N,shared:x.shared,effects:x.effects},l.updateQueue=g;return}l=g.lastBaseUpdate,l===null?g.firstBaseUpdate=f:l.next=f,g.lastBaseUpdate=f}function df(l,f,g,x){var I=l.updateQueue;ba=!1;var N=I.firstBaseUpdate,X=I.lastBaseUpdate,oe=I.shared.pending;if(oe!==null){I.shared.pending=null;var fe=oe,Ee=fe.next;fe.next=null,X===null?N=Ee:X.next=Ee,X=fe;var Xe=l.alternate;Xe!==null&&(Xe=Xe.updateQueue,oe=Xe.lastBaseUpdate,oe!==X&&(oe===null?Xe.firstBaseUpdate=Ee:oe.next=Ee,Xe.lastBaseUpdate=fe))}if(N!==null){var Qe=I.baseState;X=0,Xe=Ee=fe=null,oe=N;do{var Ue=oe.lane,vt=oe.eventTime;if((x&Ue)===Ue){Xe!==null&&(Xe=Xe.next={eventTime:vt,lane:0,tag:oe.tag,payload:oe.payload,callback:oe.callback,next:null});e:{var Ot=l,It=oe;switch(Ue=f,vt=g,It.tag){case 1:if(Ot=It.payload,typeof Ot=="function"){Qe=Ot.call(vt,Qe,Ue);break e}Qe=Ot;break e;case 3:Ot.flags=Ot.flags&-65537|128;case 0:if(Ot=It.payload,Ue=typeof 
Ot=="function"?Ot.call(vt,Qe,Ue):Ot,Ue==null)break e;Qe=U({},Qe,Ue);break e;case 2:ba=!0}}oe.callback!==null&&oe.lane!==0&&(l.flags|=64,Ue=I.effects,Ue===null?I.effects=[oe]:Ue.push(oe))}else vt={eventTime:vt,lane:Ue,tag:oe.tag,payload:oe.payload,callback:oe.callback,next:null},Xe===null?(Ee=Xe=vt,fe=Qe):Xe=Xe.next=vt,X|=Ue;if(oe=oe.next,oe===null){if(oe=I.shared.pending,oe===null)break;Ue=oe,oe=Ue.next,Ue.next=null,I.lastBaseUpdate=Ue,I.shared.pending=null}}while(!0);if(Xe===null&&(fe=Qe),I.baseState=fe,I.firstBaseUpdate=Ee,I.lastBaseUpdate=Xe,f=I.shared.interleaved,f!==null){I=f;do X|=I.lane,I=I.next;while(I!==f)}else N===null&&(I.shared.lanes=0);ss|=X,l.lanes=X,l.memoizedState=Qe}}function jx(l,f,g){if(l=f.effects,f.effects=null,l!==null)for(f=0;fg?g:4,l(!0);var x=fg.transition;fg.transition={};try{l(!1),f()}finally{Wt=g,fg.transition=x}}function eS(){return To().memoizedState}function Y3(l,f,g){var x=Ca(l);if(g={lane:x,action:g,hasEagerState:!1,eagerState:null,next:null},tS(l))nS(f,g);else if(g=Nx(l,f,g,x),g!==null){var I=Kr();ni(g,l,x,I),rS(g,f,x)}}function Q3(l,f,g){var x=Ca(l),I={lane:x,action:g,hasEagerState:!1,eagerState:null,next:null};if(tS(l))nS(f,I);else{var N=l.alternate;if(l.lanes===0&&(N===null||N.lanes===0)&&(N=f.lastRenderedReducer,N!==null))try{var X=f.lastRenderedState,oe=N(X,g);if(I.hasEagerState=!0,I.eagerState=oe,Qo(oe,X)){var fe=f.interleaved;fe===null?(I.next=I,ag(f)):(I.next=fe.next,fe.next=I),f.interleaved=I;return}}catch{}g=Nx(l,f,I,x),g!==null&&(I=Kr(),ni(g,l,x,I),rS(g,f,x))}}function tS(l){var f=l.alternate;return l===Qn||f!==null&&f===Qn}function nS(l,f){zc=vf=!0;var g=l.pending;g===null?f.next=f:(f.next=g.next,g.next=f),l.pending=f}function rS(l,f,g){if(g&4194240){var x=f.lanes;x&=l.pendingLanes,g|=x,f.lanes=g,ir(l,g)}}var 
mf={readContext:Io,useCallback:Ar,useContext:Ar,useEffect:Ar,useImperativeHandle:Ar,useInsertionEffect:Ar,useLayoutEffect:Ar,useMemo:Ar,useReducer:Ar,useRef:Ar,useState:Ar,useDebugValue:Ar,useDeferredValue:Ar,useTransition:Ar,useMutableSource:Ar,useSyncExternalStore:Ar,useId:Ar,unstable_isNewReconciler:!1},Z3={readContext:Io,useCallback:function(l,f){return Ei().memoizedState=[l,f===void 0?null:f],l},useContext:Io,useEffect:Kx,useImperativeHandle:function(l,f,g){return g=g!=null?g.concat([l]):null,hf(4194308,4,Gx.bind(null,f,l),g)},useLayoutEffect:function(l,f){return hf(4194308,4,l,f)},useInsertionEffect:function(l,f){return hf(4,2,l,f)},useMemo:function(l,f){var g=Ei();return f=f===void 0?null:f,l=l(),g.memoizedState=[l,f],l},useReducer:function(l,f,g){var x=Ei();return f=g!==void 0?g(f):f,x.memoizedState=x.baseState=f,l={pending:null,interleaved:null,lanes:0,dispatch:null,lastRenderedReducer:l,lastRenderedState:f},x.queue=l,l=l.dispatch=Y3.bind(null,Qn,l),[x.memoizedState,l]},useRef:function(l){var f=Ei();return l={current:l},f.memoizedState=l},useState:Wx,useDebugValue:yg,useDeferredValue:function(l){return Ei().memoizedState=l},useTransition:function(){var l=Wx(!1),f=l[0];return l=G3.bind(null,l[1]),Ei().memoizedState=l,[f,l]},useMutableSource:function(){},useSyncExternalStore:function(l,f,g){var x=Qn,I=Ei();if(Wn){if(g===void 0)throw Error(n(407));g=g()}else{if(g=f(),Cr===null)throw Error(n(349));as&30||zx(x,f,g)}I.memoizedState=g;var N={value:g,getSnapshot:f};return I.queue=N,Kx(Fx.bind(null,x,N,l),[l]),x.flags|=2048,_c(9,Hx.bind(null,x,N,g,f),void 0,null),g},useId:function(){var l=Ei(),f=Cr.identifierPrefix;if(Wn){var g=Wi,x=Vi;g=(x&~(1<<32-Gn(x)-1)).toString(32)+g,f=":"+f+"R"+g,g=Hc++,0<\/script>",l=l.removeChild(l.firstChild)):typeof 
x.is=="string"?l=X.createElement(g,{is:x.is}):(l=X.createElement(g),g==="select"&&(X=l,x.multiple?X.multiple=!0:x.size&&(X.size=x.size))):l=X.createElementNS(l,g),l[Si]=f,l[Rc]=x,CS(l,f,!1,!1),f.stateNode=l;e:{switch(X=Pe(g,x),g){case"dialog":zn("cancel",l),zn("close",l),I=x;break;case"iframe":case"object":case"embed":zn("load",l),I=x;break;case"video":case"audio":for(I=0;Ial&&(f.flags|=128,x=!0,Vc(N,!1),f.lanes=4194304)}else{if(!x)if(l=ff(X),l!==null){if(f.flags|=128,x=!0,g=l.updateQueue,g!==null&&(f.updateQueue=g,f.flags|=4),Vc(N,!0),N.tail===null&&N.tailMode==="hidden"&&!X.alternate&&!Wn)return zr(f),null}else 2*tn()-N.renderingStartTime>al&&g!==1073741824&&(f.flags|=128,x=!0,Vc(N,!1),f.lanes=4194304);N.isBackwards?(X.sibling=f.child,f.child=X):(g=N.last,g!==null?g.sibling=X:f.child=X,N.last=X)}return N.tail!==null?(f=N.tail,N.rendering=f,N.tail=f.sibling,N.renderingStartTime=tn(),f.sibling=null,g=Yn.current,Nn(Yn,x?g&1|2:g&1),f):(zr(f),null);case 22:case 23:return Vg(),x=f.memoizedState!==null,l!==null&&l.memoizedState!==null!==x&&(f.flags|=8192),x&&f.mode&1?po&1073741824&&(zr(f),f.subtreeFlags&6&&(f.flags|=8192)):zr(f),null;case 24:return null;case 25:return null}throw Error(n(156,f.tag))}function aD(l,f){switch(Zh(f),f.tag){case 1:return eo(f.type)&&ef(),l=f.flags,l&65536?(f.flags=l&-65537|128,f):null;case 3:return nl(),Hn(Jr),Hn(Br),dg(),l=f.flags,l&65536&&!(l&128)?(f.flags=l&-65537|128,f):null;case 5:return cg(f),null;case 13:if(Hn(Yn),l=f.memoizedState,l!==null&&l.dehydrated!==null){if(f.alternate===null)throw Error(n(340));Zs()}return l=f.flags,l&65536?(f.flags=l&-65537|128,f):null;case 19:return Hn(Yn),null;case 4:return nl(),null;case 10:return og(f.type._context),null;case 22:case 23:return Vg(),null;case 24:return null;default:return null}}var xf=!1,Hr=!1,sD=typeof WeakSet=="function"?WeakSet:Set,yt=null;function ol(l,f){var g=l.ref;if(g!==null)if(typeof g=="function")try{g(null)}catch(x){Jn(l,f,x)}else g.current=null}function 
Mg(l,f,g){try{g()}catch(x){Jn(l,f,x)}}var OS=!1;function lD(l,f){if(Vh=Hd,l=ox(),jh(l)){if("selectionStart"in l)var g={start:l.selectionStart,end:l.selectionEnd};else e:{g=(g=l.ownerDocument)&&g.defaultView||window;var x=g.getSelection&&g.getSelection();if(x&&x.rangeCount!==0){g=x.anchorNode;var I=x.anchorOffset,N=x.focusNode;x=x.focusOffset;try{g.nodeType,N.nodeType}catch{g=null;break e}var X=0,oe=-1,fe=-1,Ee=0,Xe=0,Qe=l,Ue=null;t:for(;;){for(var vt;Qe!==g||I!==0&&Qe.nodeType!==3||(oe=X+I),Qe!==N||x!==0&&Qe.nodeType!==3||(fe=X+x),Qe.nodeType===3&&(X+=Qe.nodeValue.length),(vt=Qe.firstChild)!==null;)Ue=Qe,Qe=vt;for(;;){if(Qe===l)break t;if(Ue===g&&++Ee===I&&(oe=X),Ue===N&&++Xe===x&&(fe=X),(vt=Qe.nextSibling)!==null)break;Qe=Ue,Ue=Qe.parentNode}Qe=vt}g=oe===-1||fe===-1?null:{start:oe,end:fe}}else g=null}g=g||{start:0,end:0}}else g=null;for(Wh={focusedElem:l,selectionRange:g},Hd=!1,yt=f;yt!==null;)if(f=yt,l=f.child,(f.subtreeFlags&1028)!==0&&l!==null)l.return=f,yt=l;else for(;yt!==null;){f=yt;try{var Ot=f.alternate;if(f.flags&1024)switch(f.tag){case 0:case 11:case 15:break;case 1:if(Ot!==null){var It=Ot.memoizedProps,nr=Ot.memoizedState,be=f.stateNode,he=be.getSnapshotBeforeUpdate(f.elementType===f.type?It:Jo(f.type,It),nr);be.__reactInternalSnapshotBeforeUpdate=he}break;case 3:var xe=f.stateNode.containerInfo;xe.nodeType===1?xe.textContent="":xe.nodeType===9&&xe.documentElement&&xe.removeChild(xe.documentElement);break;case 5:case 6:case 4:case 17:break;default:throw Error(n(163))}}catch(tt){Jn(f,f.return,tt)}if(l=f.sibling,l!==null){l.return=f.return,yt=l;break}yt=f.return}return Ot=OS,OS=!1,Ot}function Wc(l,f,g){var x=f.updateQueue;if(x=x!==null?x.lastEffect:null,x!==null){var I=x=x.next;do{if((I.tag&l)===l){var N=I.destroy;I.destroy=void 0,N!==void 0&&Mg(f,g,N)}I=I.next}while(I!==x)}}function Sf(l,f){if(f=f.updateQueue,f=f!==null?f.lastEffect:null,f!==null){var g=f=f.next;do{if((g.tag&l)===l){var x=g.create;g.destroy=x()}g=g.next}while(g!==f)}}function Ng(l){var 
f=l.ref;if(f!==null){var g=l.stateNode;switch(l.tag){case 5:l=g;break;default:l=g}typeof f=="function"?f(l):f.current=l}}function $S(l){var f=l.alternate;f!==null&&(l.alternate=null,$S(f)),l.child=null,l.deletions=null,l.sibling=null,l.tag===5&&(f=l.stateNode,f!==null&&(delete f[Si],delete f[Rc],delete f[Xh],delete f[W3],delete f[U3])),l.stateNode=null,l.return=null,l.dependencies=null,l.memoizedProps=null,l.memoizedState=null,l.pendingProps=null,l.stateNode=null,l.updateQueue=null}function IS(l){return l.tag===5||l.tag===3||l.tag===4}function TS(l){e:for(;;){for(;l.sibling===null;){if(l.return===null||IS(l.return))return null;l=l.return}for(l.sibling.return=l.return,l=l.sibling;l.tag!==5&&l.tag!==6&&l.tag!==18;){if(l.flags&2||l.child===null||l.tag===4)continue e;l.child.return=l,l=l.child}if(!(l.flags&2))return l.stateNode}}function Rg(l,f,g){var x=l.tag;if(x===5||x===6)l=l.stateNode,f?g.nodeType===8?g.parentNode.insertBefore(l,f):g.insertBefore(l,f):(g.nodeType===8?(f=g.parentNode,f.insertBefore(l,g)):(f=g,f.appendChild(l)),g=g._reactRootContainer,g!=null||f.onclick!==null||(f.onclick=Zd));else if(x!==4&&(l=l.child,l!==null))for(Rg(l,f,g),l=l.sibling;l!==null;)Rg(l,f,g),l=l.sibling}function Dg(l,f,g){var x=l.tag;if(x===5||x===6)l=l.stateNode,f?g.insertBefore(l,f):g.appendChild(l);else if(x!==4&&(l=l.child,l!==null))for(Dg(l,f,g),l=l.sibling;l!==null;)Dg(l,f,g),l=l.sibling}var Pr=null,ei=!1;function wa(l,f,g){for(g=g.child;g!==null;)PS(l,f,g),g=g.sibling}function PS(l,f,g){if(hn&&typeof hn.onCommitFiberUnmount=="function")try{hn.onCommitFiberUnmount(Zt,g)}catch{}switch(g.tag){case 5:Hr||ol(g,f);case 6:var x=Pr,I=ei;Pr=null,wa(l,f,g),Pr=x,ei=I,Pr!==null&&(ei?(l=Pr,g=g.stateNode,l.nodeType===8?l.parentNode.removeChild(g):l.removeChild(g)):Pr.removeChild(g.stateNode));break;case 18:Pr!==null&&(ei?(l=Pr,g=g.stateNode,l.nodeType===8?qh(l.parentNode,g):l.nodeType===1&&qh(l,g),xi(l)):qh(Pr,g.stateNode));break;case 
4:x=Pr,I=ei,Pr=g.stateNode.containerInfo,ei=!0,wa(l,f,g),Pr=x,ei=I;break;case 0:case 11:case 14:case 15:if(!Hr&&(x=g.updateQueue,x!==null&&(x=x.lastEffect,x!==null))){I=x=x.next;do{var N=I,X=N.destroy;N=N.tag,X!==void 0&&(N&2||N&4)&&Mg(g,f,X),I=I.next}while(I!==x)}wa(l,f,g);break;case 1:if(!Hr&&(ol(g,f),x=g.stateNode,typeof x.componentWillUnmount=="function"))try{x.props=g.memoizedProps,x.state=g.memoizedState,x.componentWillUnmount()}catch(oe){Jn(g,f,oe)}wa(l,f,g);break;case 21:wa(l,f,g);break;case 22:g.mode&1?(Hr=(x=Hr)||g.memoizedState!==null,wa(l,f,g),Hr=x):wa(l,f,g);break;default:wa(l,f,g)}}function MS(l){var f=l.updateQueue;if(f!==null){l.updateQueue=null;var g=l.stateNode;g===null&&(g=l.stateNode=new sD),f.forEach(function(x){var I=mD.bind(null,l,x);g.has(x)||(g.add(x),x.then(I,I))})}}function ti(l,f){var g=f.deletions;if(g!==null)for(var x=0;xI&&(I=X),x&=~N}if(x=I,x=tn()-x,x=(120>x?120:480>x?480:1080>x?1080:1920>x?1920:3e3>x?3e3:4320>x?4320:1960*uD(x/1960))-x,10l?16:l,Sa===null)var x=!1;else{if(l=Sa,Sa=null,$f=0,mn&6)throw Error(n(331));var I=mn;for(mn|=4,yt=l.current;yt!==null;){var N=yt,X=N.child;if(yt.flags&16){var oe=N.deletions;if(oe!==null){for(var fe=0;fetn()-Bg?cs(l,0):Lg|=g),ro(l,f)}function WS(l,f){f===0&&(l.mode&1?(f=Tn,Tn<<=1,!(Tn&130023424)&&(Tn=4194304)):f=1);var g=Kr();l=Ui(l,f),l!==null&&(Cn(l,f,g),ro(l,g))}function gD(l){var f=l.memoizedState,g=0;f!==null&&(g=f.retryLane),WS(l,g)}function mD(l,f){var g=0;switch(l.tag){case 13:var x=l.stateNode,I=l.memoizedState;I!==null&&(g=I.retryLane);break;case 19:x=l.stateNode;break;default:throw Error(n(314))}x!==null&&x.delete(f),WS(l,g)}var US;US=function(l,f,g){if(l!==null)if(l.memoizedProps!==f.pendingProps||Jr.current)to=!0;else{if(!(l.lanes&g)&&!(f.flags&128))return to=!1,oD(l,f,g);to=!!(l.flags&131072)}else to=!1,Wn&&f.flags&1048576&&Ex(f,of,f.index);switch(f.lanes=0,f.tag){case 2:var x=f.type;wf(l,f),l=f.pendingProps;var I=Gs(f,Br.current);tl(f,g),I=vg(null,f,x,l,I,g);var N=hg();return 
f.flags|=1,typeof I=="object"&&I!==null&&typeof I.render=="function"&&I.$$typeof===void 0?(f.tag=1,f.memoizedState=null,f.updateQueue=null,eo(x)?(N=!0,tf(f)):N=!1,f.memoizedState=I.state!==null&&I.state!==void 0?I.state:null,sg(f),I.updater=bf,f.stateNode=I,I._reactInternals=f,xg(f,x,l,g),f=kg(null,f,x,!0,N,g)):(f.tag=0,Wn&&N&&Qh(f),Ur(null,f,I,g),f=f.child),f;case 16:x=f.elementType;e:{switch(wf(l,f),l=f.pendingProps,I=x._init,x=I(x._payload),f.type=x,I=f.tag=yD(x),l=Jo(x,l),I){case 0:f=Eg(null,f,x,l,g);break e;case 1:f=mS(null,f,x,l,g);break e;case 11:f=fS(null,f,x,l,g);break e;case 14:f=pS(null,f,x,Jo(x.type,l),g);break e}throw Error(n(306,x,""))}return f;case 0:return x=f.type,I=f.pendingProps,I=f.elementType===x?I:Jo(x,I),Eg(l,f,x,I,g);case 1:return x=f.type,I=f.pendingProps,I=f.elementType===x?I:Jo(x,I),mS(l,f,x,I,g);case 3:e:{if(bS(f),l===null)throw Error(n(387));x=f.pendingProps,N=f.memoizedState,I=N.element,Rx(l,f),df(f,x,null,g);var X=f.memoizedState;if(x=X.element,N.isDehydrated)if(N={element:x,isDehydrated:!1,cache:X.cache,pendingSuspenseBoundaries:X.pendingSuspenseBoundaries,transitions:X.transitions},f.updateQueue.baseState=N,f.memoizedState=N,f.flags&256){I=rl(Error(n(423)),f),f=yS(l,f,x,g,I);break e}else if(x!==I){I=rl(Error(n(424)),f),f=yS(l,f,x,g,I);break e}else for(fo=va(f.stateNode.containerInfo.firstChild),uo=f,Wn=!0,Zo=null,g=Mx(f,null,x,g),f.child=g;g;)g.flags=g.flags&-3|4096,g=g.sibling;else{if(Zs(),x===I){f=qi(l,f,g);break e}Ur(l,f,x,g)}f=f.child}return f;case 5:return Lx(f),l===null&&eg(f),x=f.type,I=f.pendingProps,N=l!==null?l.memoizedProps:null,X=I.children,Uh(x,I)?X=null:N!==null&&Uh(x,N)&&(f.flags|=32),gS(l,f),Ur(l,f,X,g),f.child;case 6:return l===null&&eg(f),null;case 13:return wS(l,f,g);case 4:return lg(f,f.stateNode.containerInfo),x=f.pendingProps,l===null?f.child=Js(f,null,x,g):Ur(l,f,x,g),f.child;case 11:return x=f.type,I=f.pendingProps,I=f.elementType===x?I:Jo(x,I),fS(l,f,x,I,g);case 7:return Ur(l,f,f.pendingProps,g),f.child;case 
8:return Ur(l,f,f.pendingProps.children,g),f.child;case 12:return Ur(l,f,f.pendingProps.children,g),f.child;case 10:e:{if(x=f.type._context,I=f.pendingProps,N=f.memoizedProps,X=I.value,Nn(lf,x._currentValue),x._currentValue=X,N!==null)if(Qo(N.value,X)){if(N.children===I.children&&!Jr.current){f=qi(l,f,g);break e}}else for(N=f.child,N!==null&&(N.return=f);N!==null;){var oe=N.dependencies;if(oe!==null){X=N.child;for(var fe=oe.firstContext;fe!==null;){if(fe.context===x){if(N.tag===1){fe=Ki(-1,g&-g),fe.tag=2;var Ee=N.updateQueue;if(Ee!==null){Ee=Ee.shared;var Xe=Ee.pending;Xe===null?fe.next=fe:(fe.next=Xe.next,Xe.next=fe),Ee.pending=fe}}N.lanes|=g,fe=N.alternate,fe!==null&&(fe.lanes|=g),ig(N.return,g,f),oe.lanes|=g;break}fe=fe.next}}else if(N.tag===10)X=N.type===f.type?null:N.child;else if(N.tag===18){if(X=N.return,X===null)throw Error(n(341));X.lanes|=g,oe=X.alternate,oe!==null&&(oe.lanes|=g),ig(X,g,f),X=N.sibling}else X=N.child;if(X!==null)X.return=N;else for(X=N;X!==null;){if(X===f){X=null;break}if(N=X.sibling,N!==null){N.return=X.return,X=N;break}X=X.return}N=X}Ur(l,f,I.children,g),f=f.child}return f;case 9:return I=f.type,x=f.pendingProps.children,tl(f,g),I=Io(I),x=x(I),f.flags|=1,Ur(l,f,x,g),f.child;case 14:return x=f.type,I=Jo(x,f.pendingProps),I=Jo(x.type,I),pS(l,f,x,I,g);case 15:return vS(l,f,f.type,f.pendingProps,g);case 17:return x=f.type,I=f.pendingProps,I=f.elementType===x?I:Jo(x,I),wf(l,f),f.tag=1,eo(x)?(l=!0,tf(f)):l=!1,tl(f,g),iS(f,x,I),xg(f,x,I,g),kg(null,f,x,!0,l,g);case 19:return SS(l,f,g);case 22:return hS(l,f,g)}throw Error(n(156,f.tag))};function KS(l,f){return lt(l,f)}function bD(l,f,g,x){this.tag=l,this.key=g,this.sibling=this.child=this.return=this.stateNode=this.type=this.elementType=null,this.index=0,this.ref=null,this.pendingProps=f,this.dependencies=this.memoizedState=this.updateQueue=this.memoizedProps=null,this.mode=x,this.subtreeFlags=this.flags=0,this.deletions=null,this.childLanes=this.lanes=0,this.alternate=null}function 
Mo(l,f,g,x){return new bD(l,f,g,x)}function Ug(l){return l=l.prototype,!(!l||!l.isReactComponent)}function yD(l){if(typeof l=="function")return Ug(l)?1:0;if(l!=null){if(l=l.$$typeof,l===V)return 11;if(l===_)return 14}return 2}function ka(l,f){var g=l.alternate;return g===null?(g=Mo(l.tag,f,l.key,l.mode),g.elementType=l.elementType,g.type=l.type,g.stateNode=l.stateNode,g.alternate=l,l.alternate=g):(g.pendingProps=f,g.type=l.type,g.flags=0,g.subtreeFlags=0,g.deletions=null),g.flags=l.flags&14680064,g.childLanes=l.childLanes,g.lanes=l.lanes,g.child=l.child,g.memoizedProps=l.memoizedProps,g.memoizedState=l.memoizedState,g.updateQueue=l.updateQueue,f=l.dependencies,g.dependencies=f===null?null:{lanes:f.lanes,firstContext:f.firstContext},g.sibling=l.sibling,g.index=l.index,g.ref=l.ref,g}function Mf(l,f,g,x,I,N){var X=2;if(x=l,typeof l=="function")Ug(l)&&(X=1);else if(typeof l=="string")X=5;else e:switch(l){case T:return ds(g.children,I,N,f);case M:X=8,I|=8;break;case P:return l=Mo(12,g,f,I|2),l.elementType=P,l.lanes=N,l;case z:return l=Mo(13,g,f,I),l.elementType=z,l.lanes=N,l;case B:return l=Mo(19,g,f,I),l.elementType=B,l.lanes=N,l;case j:return Nf(g,I,N,f);default:if(typeof l=="object"&&l!==null)switch(l.$$typeof){case R:X=10;break e;case A:X=9;break e;case V:X=11;break e;case _:X=14;break e;case H:X=16,x=null;break e}throw Error(n(130,l==null?l:typeof l,""))}return f=Mo(X,g,f,I),f.elementType=l,f.type=x,f.lanes=N,f}function ds(l,f,g,x){return l=Mo(7,l,x,f),l.lanes=g,l}function Nf(l,f,g,x){return l=Mo(22,l,x,f),l.elementType=j,l.lanes=g,l.stateNode={isHidden:!1},l}function Kg(l,f,g){return l=Mo(6,l,null,f),l.lanes=g,l}function qg(l,f,g){return f=Mo(4,l.children!==null?l.children:[],l.key,f),f.lanes=g,f.stateNode={containerInfo:l.containerInfo,pendingChildren:null,implementation:l.implementation},f}function 
wD(l,f,g,x,I){this.tag=f,this.containerInfo=l,this.finishedWork=this.pingCache=this.current=this.pendingChildren=null,this.timeoutHandle=-1,this.callbackNode=this.pendingContext=this.context=null,this.callbackPriority=0,this.eventTimes=_n(0),this.expirationTimes=_n(-1),this.entangledLanes=this.finishedLanes=this.mutableReadLanes=this.expiredLanes=this.pingedLanes=this.suspendedLanes=this.pendingLanes=0,this.entanglements=_n(0),this.identifierPrefix=x,this.onRecoverableError=I,this.mutableSourceEagerHydrationData=null}function Xg(l,f,g,x,I,N,X,oe,fe){return l=new wD(l,f,g,oe,fe),f===1?(f=1,N===!0&&(f|=8)):f=0,N=Mo(3,null,null,f),l.current=N,N.stateNode=l,N.memoizedState={element:x,isDehydrated:g,cache:null,transitions:null,pendingSuspenseBoundaries:null},sg(N),l}function xD(l,f,g){var x=3"u"||typeof __REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE!="function"))try{__REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE(T2)}catch{}}T2();I2.exports=wL();var pi=I2.exports;const Hu=js(pi),xL=v2({__proto__:null,default:Hu},[pi]);var P2=pi;kv.createRoot=P2.createRoot;kv.hydrateRoot=P2.hydrateRoot;var M2={exports:{}};(function(e){(function(){var t={}.hasOwnProperty;function n(){for(var i="",a=0;a1&&arguments[1]!==void 0?arguments[1]:{},n=[];return ue.Children.forEach(e,function(r){r==null&&!t.keepEmpty||(Array.isArray(r)?n=n.concat(lo(r)):yu.isFragment(r)&&r.props?n=n.concat(lo(r.props.children,t)):n.push(r))}),n}var T0={},EL=function(t){};function kL(e,t){}function OL(e,t){}function $L(){T0={}}function R2(e,t,n){!t&&!T0[n]&&(e(!1,n),T0[n]=!0)}function Fn(e,t){R2(kL,e,t)}function IL(e,t){R2(OL,e,t)}Fn.preMessage=EL;Fn.resetWarned=$L;Fn.noteOnce=IL;function st(e){"@babel/helpers - typeof";return st=typeof Symbol=="function"&&typeof Symbol.iterator=="symbol"?function(t){return typeof t}:function(t){return t&&typeof Symbol=="function"&&t.constructor===Symbol&&t!==Symbol.prototype?"symbol":typeof t},st(e)}function TL(e,t){if(st(e)!="object"||!e)return e;var n=e[Symbol.toPrimitive];if(n!==void 0){var 
r=n.call(e,t||"default");if(st(r)!="object")return r;throw new TypeError("@@toPrimitive must return a primitive value.")}return(t==="string"?String:Number)(e)}function D2(e){var t=TL(e,"string");return st(t)=="symbol"?t:t+""}function K(e,t,n){return(t=D2(t))in e?Object.defineProperty(e,t,{value:n,enumerable:!0,configurable:!0,writable:!0}):e[t]=n,e}function vC(e,t){var n=Object.keys(e);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(e);t&&(r=r.filter(function(o){return Object.getOwnPropertyDescriptor(e,o).enumerable})),n.push.apply(n,r)}return n}function Z(e){for(var t=1;t=19;var M0=d.createContext(null);function ML(e){var t=e.children,n=e.onBatchResize,r=d.useRef(0),o=d.useRef([]),i=d.useContext(M0),a=d.useCallback(function(s,c,u){r.current+=1;var p=r.current;o.current.push({size:s,element:c,data:u}),Promise.resolve().then(function(){p===r.current&&(n==null||n(o.current),o.current=[])}),i==null||i(s,c,u)},[n,i]);return d.createElement(M0.Provider,{value:a},t)}var j2=function(){if(typeof Map<"u")return Map;function e(t,n){var r=-1;return t.some(function(o,i){return o[0]===n?(r=i,!0):!1}),r}return function(){function t(){this.__entries__=[]}return Object.defineProperty(t.prototype,"size",{get:function(){return this.__entries__.length},enumerable:!0,configurable:!0}),t.prototype.get=function(n){var r=e(this.__entries__,n),o=this.__entries__[r];return o&&o[1]},t.prototype.set=function(n,r){var o=e(this.__entries__,n);~o?this.__entries__[o][1]=r:this.__entries__.push([n,r])},t.prototype.delete=function(n){var r=this.__entries__,o=e(r,n);~o&&r.splice(o,1)},t.prototype.has=function(n){return!!~e(this.__entries__,n)},t.prototype.clear=function(){this.__entries__.splice(0)},t.prototype.forEach=function(n,r){r===void 0&&(r=null);for(var 
o=0,i=this.__entries__;o0},e.prototype.connect_=function(){!N0||this.connected_||(document.addEventListener("transitionend",this.onTransitionEnd_),window.addEventListener("resize",this.refresh),BL?(this.mutationsObserver_=new MutationObserver(this.refresh),this.mutationsObserver_.observe(document,{attributes:!0,childList:!0,characterData:!0,subtree:!0})):(document.addEventListener("DOMSubtreeModified",this.refresh),this.mutationEventsAdded_=!0),this.connected_=!0)},e.prototype.disconnect_=function(){!N0||!this.connected_||(document.removeEventListener("transitionend",this.onTransitionEnd_),window.removeEventListener("resize",this.refresh),this.mutationsObserver_&&this.mutationsObserver_.disconnect(),this.mutationEventsAdded_&&document.removeEventListener("DOMSubtreeModified",this.refresh),this.mutationsObserver_=null,this.mutationEventsAdded_=!1,this.connected_=!1)},e.prototype.onTransitionEnd_=function(t){var n=t.propertyName,r=n===void 0?"":n,o=LL.some(function(i){return!!~r.indexOf(i)});o&&this.refresh()},e.getInstance=function(){return this.instance_||(this.instance_=new e),this.instance_},e.instance_=null,e}(),L2=function(e,t){for(var n=0,r=Object.keys(t);n"u"||!(Element instanceof Object))){if(!(t instanceof _l(t).Element))throw new TypeError('parameter 1 is not of type "Element".');var n=this.observations_;n.has(t)||(n.set(t,new KL(t)),this.controller_.addObserver(this),this.controller_.refresh())}},e.prototype.unobserve=function(t){if(!arguments.length)throw new TypeError("1 argument required, but only 0 present.");if(!(typeof Element>"u"||!(Element instanceof Object))){if(!(t instanceof _l(t).Element))throw new TypeError('parameter 1 is not of type "Element".');var n=this.observations_;n.has(t)&&(n.delete(t),n.size||this.controller_.removeObserver(this))}},e.prototype.disconnect=function(){this.clearActive(),this.observations_.clear(),this.controller_.removeObserver(this)},e.prototype.gatherActive=function(){var 
t=this;this.clearActive(),this.observations_.forEach(function(n){n.isActive()&&t.activeObservations_.push(n)})},e.prototype.broadcastActive=function(){if(this.hasActive()){var t=this.callbackCtx_,n=this.activeObservations_.map(function(r){return new qL(r.target,r.broadcastRect())});this.callback_.call(t,n,t),this.clearActive()}},e.prototype.clearActive=function(){this.activeObservations_.splice(0)},e.prototype.hasActive=function(){return this.activeObservations_.length>0},e}(),A2=typeof WeakMap<"u"?new WeakMap:new j2,z2=function(){function e(t){if(!(this instanceof e))throw new TypeError("Cannot call a class as a function.");if(!arguments.length)throw new TypeError("1 argument required, but only 0 present.");var n=AL.getInstance(),r=new XL(t,n,this);A2.set(this,r)}return e}();["observe","unobserve","disconnect"].forEach(function(e){z2.prototype[e]=function(){var t;return(t=A2.get(this))[e].apply(t,arguments)}});var GL=function(){return typeof Vp.ResizeObserver<"u"?Vp.ResizeObserver:z2}(),Ba=new Map;function YL(e){e.forEach(function(t){var n,r=t.target;(n=Ba.get(r))===null||n===void 0||n.forEach(function(o){return o(r)})})}var H2=new GL(YL);function QL(e,t){Ba.has(e)||(Ba.set(e,new Set),H2.observe(e)),Ba.get(e).add(t)}function ZL(e,t){Ba.has(e)&&(Ba.get(e).delete(t),Ba.get(e).size||(H2.unobserve(e),Ba.delete(e)))}function Kn(e,t){if(!(e instanceof t))throw new TypeError("Cannot call a class as a function")}function gC(e,t){for(var n=0;ne.length)&&(t=e.length);for(var n=0,r=Array(t);n1&&arguments[1]!==void 0?arguments[1]:1;mC+=1;var r=mC;function o(i){if(i===0)W2(r),t();else{var a=_2(function(){o(i-1)});Ay.set(r,a)}}return o(n),r};bn.cancel=function(e){var t=Ay.get(e);return W2(e),V2(t)};function U2(e){if(Array.isArray(e))return e}function s6(e,t){var n=e==null?null:typeof Symbol<"u"&&e[Symbol.iterator]||e["@@iterator"];if(n!=null){var r,o,i,a,s=[],c=!0,u=!1;try{if(i=(n=n.call(e)).next,t===0){if(Object(n)!==n)return;c=!1}else 
for(;!(c=(r=i.call(n)).done)&&(s.push(r.value),s.length!==t);c=!0);}catch(p){u=!0,o=p}finally{try{if(!c&&n.return!=null&&(a=n.return(),Object(a)!==a))return}finally{if(u)throw o}}return s}}function K2(){throw new TypeError(`Invalid attempt to destructure non-iterable instance. -In order to be iterable, non-array objects must have a [Symbol.iterator]() method.`)}function ve(e,t){return U2(e)||s6(e,t)||By(e,t)||K2()}function Uu(e){for(var t=0,n,r=0,o=e.length;o>=4;++r,o-=4)n=e.charCodeAt(r)&255|(e.charCodeAt(++r)&255)<<8|(e.charCodeAt(++r)&255)<<16|(e.charCodeAt(++r)&255)<<24,n=(n&65535)*1540483477+((n>>>16)*59797<<16),n^=n>>>24,t=(n&65535)*1540483477+((n>>>16)*59797<<16)^(t&65535)*1540483477+((t>>>16)*59797<<16);switch(o){case 3:t^=(e.charCodeAt(r+2)&255)<<16;case 2:t^=(e.charCodeAt(r+1)&255)<<8;case 1:t^=e.charCodeAt(r)&255,t=(t&65535)*1540483477+((t>>>16)*59797<<16)}return t^=t>>>13,t=(t&65535)*1540483477+((t>>>16)*59797<<16),((t^t>>>15)>>>0).toString(36)}function $r(){return!!(typeof window<"u"&&window.document&&window.document.createElement)}function D0(e,t){if(!e)return!1;if(e.contains)return e.contains(t);for(var n=t;n;){if(n===e)return!0;n=n.parentNode}return!1}var bC="data-rc-order",yC="data-rc-priority",l6="rc-util-key",j0=new Map;function q2(){var e=arguments.length>0&&arguments[0]!==void 0?arguments[0]:{},t=e.mark;return t?t.startsWith("data-")?t:"data-".concat(t):l6}function $v(e){if(e.attachTo)return e.attachTo;var t=document.querySelector("head");return t||document.body}function c6(e){return e==="queue"?"prependQueue":e?"prepend":"append"}function zy(e){return Array.from((j0.get(e)||e).children).filter(function(t){return t.tagName==="STYLE"})}function X2(e){var t=arguments.length>1&&arguments[1]!==void 0?arguments[1]:{};if(!$r())return null;var n=t.csp,r=t.prepend,o=t.priority,i=o===void 0?0:o,a=c6(r),s=a==="prependQueue",c=document.createElement("style");c.setAttribute(bC,a),s&&i&&c.setAttribute(yC,"".concat(i)),n!=null&&n.nonce&&(c.nonce=n==null?void 
0:n.nonce),c.innerHTML=e;var u=$v(t),p=u.firstChild;if(r){if(s){var v=(t.styles||zy(u)).filter(function(h){if(!["prepend","prependQueue"].includes(h.getAttribute(bC)))return!1;var m=Number(h.getAttribute(yC)||0);return i>=m});if(v.length)return u.insertBefore(c,v[v.length-1].nextSibling),c}u.insertBefore(c,p)}else u.appendChild(c);return c}function G2(e){var t=arguments.length>1&&arguments[1]!==void 0?arguments[1]:{},n=$v(t);return(t.styles||zy(n)).find(function(r){return r.getAttribute(q2(t))===e})}function Ku(e){var t=arguments.length>1&&arguments[1]!==void 0?arguments[1]:{},n=G2(e,t);if(n){var r=$v(t);r.removeChild(n)}}function u6(e,t){var n=j0.get(e);if(!n||!D0(document,n)){var r=X2("",t),o=r.parentNode;j0.set(e,o),e.removeChild(r)}}function ea(e,t){var n=arguments.length>2&&arguments[2]!==void 0?arguments[2]:{},r=$v(n),o=zy(r),i=Z(Z({},n),{},{styles:o});u6(r,i);var a=G2(t,i);if(a){var s,c;if((s=i.csp)!==null&&s!==void 0&&s.nonce&&a.nonce!==((c=i.csp)===null||c===void 0?void 0:c.nonce)){var u;a.nonce=(u=i.csp)===null||u===void 0?void 0:u.nonce}return a.innerHTML!==e&&(a.innerHTML=e),a}var p=X2(e,i);return p.setAttribute(q2(i),t),p}function d6(e,t){if(e==null)return{};var n={};for(var r in e)if({}.hasOwnProperty.call(e,r)){if(t.includes(r))continue;n[r]=e[r]}return n}function Mt(e,t){if(e==null)return{};var n,r,o=d6(e,t);if(Object.getOwnPropertySymbols){var i=Object.getOwnPropertySymbols(e);for(r=0;r2&&arguments[2]!==void 0?arguments[2]:!1,r=new Set;function o(i,a){var s=arguments.length>2&&arguments[2]!==void 0?arguments[2]:1,c=r.has(i);if(Fn(!c,"Warning: There may be circular references"),c)return!1;if(i===a)return!0;if(n&&s>1)return!1;r.add(i);var u=s+1;if(Array.isArray(i)){if(!Array.isArray(a)||i.length!==a.length)return!1;for(var p=0;p1&&arguments[1]!==void 0?arguments[1]:!1,a={map:this.cache};return n.forEach(function(s){if(!a)a=void 0;else{var c;a=(c=a)===null||c===void 0||(c=c.map)===null||c===void 0?void 0:c.get(s)}}),(r=a)!==null&&r!==void 
0&&r.value&&i&&(a.value[1]=this.cacheCallTimes++),(o=a)===null||o===void 0?void 0:o.value}},{key:"get",value:function(n){var r;return(r=this.internalGet(n,!0))===null||r===void 0?void 0:r[0]}},{key:"has",value:function(n){return!!this.internalGet(n)}},{key:"set",value:function(n,r){var o=this;if(!this.has(n)){if(this.size()+1>e.MAX_CACHE_SIZE+e.MAX_CACHE_OFFSET){var i=this.keys.reduce(function(u,p){var v=ve(u,2),h=v[1];return o.internalGet(p)[1]0,void 0),wC+=1}return qn(e,[{key:"getDerivativeToken",value:function(n){return this.derivatives.reduce(function(r,o){return o(n,r)},void 0)}}]),e}(),em=new Hy;function qu(e){var t=Array.isArray(e)?e:[e];return em.has(t)||em.set(t,new Y2(t)),em.get(t)}var g6=new WeakMap,tm={};function m6(e,t){for(var n=g6,r=0;r1&&arguments[1]!==void 0?arguments[1]:!1,n=xC.get(e)||"";return n||(Object.keys(e).forEach(function(r){var o=e[r];n+=r,o instanceof Y2?n+=o.id:o&&st(o)==="object"?n+=xu(o,t):n+=o}),t&&(n=Uu(n)),xC.set(e,n)),n}function SC(e,t){return Uu("".concat(t,"_").concat(xu(e,!0)))}var B0=$r();function de(e){return typeof e=="number"?"".concat(e,"px"):e}function Up(e,t,n){var r=arguments.length>3&&arguments[3]!==void 0?arguments[3]:{},o=arguments.length>4&&arguments[4]!==void 0?arguments[4]:!1;if(o)return e;var i=Z(Z({},r),{},K(K({},Vl,t),ui,n)),a=Object.keys(i).map(function(s){var c=i[s];return c?"".concat(s,'="').concat(c,'"'):null}).filter(function(s){return s}).join(" ");return"")}var Ep=function(t){var n=arguments.length>1&&arguments[1]!==void 0?arguments[1]:"";return"--".concat(n?"".concat(n,"-"):"").concat(t).replace(/([a-z0-9])([A-Z])/g,"$1-$2").replace(/([A-Z]+)([A-Z][a-z0-9]+)/g,"$1-$2").replace(/([a-z])([A-Z0-9])/g,"$1-$2").toLowerCase()},b6=function(t,n,r){return Object.keys(t).length?".".concat(n).concat(r!=null&&r.scope?".".concat(r.scope):"","{").concat(Object.entries(t).map(function(o){var i=ve(o,2),a=i[0],s=i[1];return"".concat(a,":").concat(s,";")}).join(""),"}"):""},Q2=function(t,n,r){var o={},i={};return 
Object.entries(t).forEach(function(a){var s,c,u=ve(a,2),p=u[0],v=u[1];if(r!=null&&(s=r.preserve)!==null&&s!==void 0&&s[p])i[p]=v;else if((typeof v=="string"||typeof v=="number")&&!(r!=null&&(c=r.ignore)!==null&&c!==void 0&&c[p])){var h,m=Ep(p,r==null?void 0:r.prefix);o[m]=typeof v=="number"&&!(r!=null&&(h=r.unitless)!==null&&h!==void 0&&h[p])?"".concat(v,"px"):String(v),i[p]="var(".concat(m,")")}}),[i,b6(o,n,{scope:r==null?void 0:r.scope})]},CC=$r()?d.useLayoutEffect:d.useEffect,sn=function(t,n){var r=d.useRef(!0);CC(function(){return t(r.current)},n),CC(function(){return r.current=!1,function(){r.current=!0}},[])},EC=function(t,n){sn(function(r){if(!r)return t()},n)},y6=Z({},Ev),kC=y6.useInsertionEffect,w6=function(t,n,r){d.useMemo(t,r),sn(function(){return n(!0)},r)},x6=kC?function(e,t,n){return kC(function(){return e(),t()},n)}:w6,S6=Z({},Ev),C6=S6.useInsertionEffect,E6=function(t){var n=[],r=!1;function o(i){r||n.push(i)}return d.useEffect(function(){return r=!1,function(){r=!0,n.length&&n.forEach(function(i){return i()})}},t),o},k6=function(){return function(t){t()}},O6=typeof C6<"u"?E6:k6;function Fy(e,t,n,r,o){var i=d.useContext(Iv),a=i.cache,s=[e].concat(Se(t)),c=L0(s),u=O6([c]),p=function(b){a.opUpdate(c,function(y){var w=y||[void 0,void 0],C=ve(w,2),S=C[0],E=S===void 0?0:S,k=C[1],O=k,$=O||n(),T=[E,$];return b?b(T):T})};d.useMemo(function(){p()},[c]);var v=a.opGet(c),h=v[1];return x6(function(){o==null||o(h)},function(m){return p(function(b){var y=ve(b,2),w=y[0],C=y[1];return m&&w===0&&(o==null||o(h)),[w+1,C]}),function(){a.opUpdate(c,function(b){var y=b||[],w=ve(y,2),C=w[0],S=C===void 0?0:C,E=w[1],k=S-1;return k===0?(u(function(){(m||!a.opGet(c))&&(r==null||r(E,!1))}),null):[S-1,E]})}},[c]),h}var $6={},I6="css",ms=new Map;function T6(e){ms.set(e,(ms.get(e)||0)+1)}function P6(e,t){if(typeof document<"u"){var n=document.querySelectorAll("style[".concat(Vl,'="').concat(e,'"]'));n.forEach(function(r){if(r[Aa]===t){var o;(o=r.parentNode)===null||o===void 
0||o.removeChild(r)}})}}var M6=0;function N6(e,t){ms.set(e,(ms.get(e)||0)-1);var n=Array.from(ms.keys()),r=n.filter(function(o){var i=ms.get(o)||0;return i<=0});n.length-r.length>M6&&r.forEach(function(o){P6(o,t),ms.delete(o)})}var Z2=function(t,n,r,o){var i=r.getDerivativeToken(t),a=Z(Z({},i),n);return o&&(a=o(a)),a},J2="token";function R6(e,t){var n=arguments.length>2&&arguments[2]!==void 0?arguments[2]:{},r=d.useContext(Iv),o=r.cache.instanceId,i=r.container,a=n.salt,s=a===void 0?"":a,c=n.override,u=c===void 0?$6:c,p=n.formatToken,v=n.getComputedToken,h=n.cssVar,m=m6(function(){return Object.assign.apply(Object,[{}].concat(Se(t)))},t),b=xu(m),y=xu(u),w=h?xu(h):"",C=Fy(J2,[s,e.id,b,y,w],function(){var S,E=v?v(m,u,e):Z2(m,u,e,p),k=Z({},E),O="";if(h){var $=Q2(E,h.key,{prefix:h.prefix,ignore:h.ignore,unitless:h.unitless,preserve:h.preserve}),T=ve($,2);E=T[0],O=T[1]}var M=SC(E,s);E._tokenKey=M,k._tokenKey=SC(k,s);var P=(S=h==null?void 0:h.key)!==null&&S!==void 0?S:M;E._themeKey=P,T6(P);var R="".concat(I6,"-").concat(Uu(M));return E._hashId=R,[E,R,k,O,(h==null?void 0:h.key)||""]},function(S){N6(S[0]._themeKey,o)},function(S){var E=ve(S,4),k=E[0],O=E[3];if(h&&O){var $=ea(O,Uu("css-variables-".concat(k._themeKey)),{mark:ui,prepend:"queue",attachTo:i,priority:-999});$[Aa]=o,$.setAttribute(Vl,k._themeKey)}});return C}var D6=function(t,n,r){var o=ve(t,5),i=o[2],a=o[3],s=o[4],c=r||{},u=c.plain;if(!a)return null;var 
p=i._tokenKey,v=-999,h={"data-rc-order":"prependQueue","data-rc-priority":"".concat(v)},m=Up(a,s,p,h,u);return[v,p,m]},j6={animationIterationCount:1,borderImageOutset:1,borderImageSlice:1,borderImageWidth:1,boxFlex:1,boxFlexGroup:1,boxOrdinalGroup:1,columnCount:1,columns:1,flex:1,flexGrow:1,flexPositive:1,flexShrink:1,flexNegative:1,flexOrder:1,gridRow:1,gridRowEnd:1,gridRowSpan:1,gridRowStart:1,gridColumn:1,gridColumnEnd:1,gridColumnSpan:1,gridColumnStart:1,msGridRow:1,msGridRowSpan:1,msGridColumn:1,msGridColumnSpan:1,fontWeight:1,lineHeight:1,opacity:1,order:1,orphans:1,tabSize:1,widows:1,zIndex:1,zoom:1,WebkitLineClamp:1,fillOpacity:1,floodOpacity:1,stopOpacity:1,strokeDasharray:1,strokeDashoffset:1,strokeMiterlimit:1,strokeOpacity:1,strokeWidth:1},eI="comm",tI="rule",nI="decl",L6="@import",B6="@keyframes",A6="@layer",rI=Math.abs,_y=String.fromCharCode;function oI(e){return e.trim()}function kp(e,t,n){return e.replace(t,n)}function z6(e,t,n){return e.indexOf(t,n)}function Xu(e,t){return e.charCodeAt(t)|0}function Wl(e,t,n){return e.slice(t,n)}function Pi(e){return e.length}function H6(e){return e.length}function Hf(e,t){return t.push(e),e}var Tv=1,Ul=1,iI=0,Uo=0,ur=0,oc="";function Vy(e,t,n,r,o,i,a,s){return{value:e,root:t,parent:n,type:r,props:o,children:i,line:Tv,column:Ul,length:a,return:"",siblings:s}}function F6(){return ur}function _6(){return ur=Uo>0?Xu(oc,--Uo):0,Ul--,ur===10&&(Ul=1,Tv--),ur}function di(){return ur=Uo2||Gu(ur)>3?"":" "}function K6(e,t){for(;--t&&di()&&!(ur<48||ur>102||ur>57&&ur<65||ur>70&&ur<97););return Pv(e,Op()+(t<6&&za()==32&&di()==32))}function A0(e){for(;di();)switch(ur){case e:return Uo;case 34:case 39:e!==34&&e!==39&&A0(ur);break;case 40:e===41&&A0(e);break;case 92:di();break}return Uo}function q6(e,t){for(;di()&&e+ur!==57;)if(e+ur===84&&za()===47)break;return"/*"+Pv(t,Uo-1)+"*"+_y(e===47?e:di())}function X6(e){for(;!Gu(za());)di();return Pv(e,Uo)}function G6(e){return W6($p("",null,null,null,[""],e=V6(e),0,[0],e))}function 
$p(e,t,n,r,o,i,a,s,c){for(var u=0,p=0,v=a,h=0,m=0,b=0,y=1,w=1,C=1,S=0,E="",k=o,O=i,$=r,T=E;w;)switch(b=S,S=di()){case 40:if(b!=108&&Xu(T,v-1)==58){z6(T+=kp(nm(S),"&","&\f"),"&\f",rI(u?s[u-1]:0))!=-1&&(C=-1);break}case 34:case 39:case 91:T+=nm(S);break;case 9:case 10:case 13:case 32:T+=U6(b);break;case 92:T+=K6(Op()-1,7);continue;case 47:switch(za()){case 42:case 47:Hf(Y6(q6(di(),Op()),t,n,c),c),(Gu(b||1)==5||Gu(za()||1)==5)&&Pi(T)&&Wl(T,-1,void 0)!==" "&&(T+=" ");break;default:T+="/"}break;case 123*y:s[u++]=Pi(T)*C;case 125*y:case 59:case 0:switch(S){case 0:case 125:w=0;case 59+p:C==-1&&(T=kp(T,/\f/g,"")),m>0&&(Pi(T)-v||y===0&&b===47)&&Hf(m>32?$C(T+";",r,n,v-1,c):$C(kp(T," ","")+";",r,n,v-2,c),c);break;case 59:T+=";";default:if(Hf($=OC(T,t,n,u,p,o,s,E,k=[],O=[],v,i),i),S===123)if(p===0)$p(T,t,$,$,k,i,v,s,O);else switch(h===99&&Xu(T,3)===110?100:h){case 100:case 108:case 109:case 115:$p(e,$,$,r&&Hf(OC(e,$,$,0,0,o,s,E,o,k=[],v,O),O),o,O,v,s,r?k:O);break;default:$p(T,$,$,$,[""],O,0,s,O)}}u=p=m=0,y=C=1,E=T="",v=a;break;case 58:v=1+Pi(T),m=b;default:if(y<1){if(S==123)--y;else if(S==125&&y++==0&&_6()==125)continue}switch(T+=_y(S),S*y){case 38:C=p>0?1:(T+="\f",-1);break;case 44:s[u++]=(Pi(T)-1)*C,C=1;break;case 64:za()===45&&(T+=nm(di())),h=za(),p=v=Pi(E=T+=X6(Op())),S++;break;case 45:b===45&&Pi(T)==2&&(y=0)}}return i}function OC(e,t,n,r,o,i,a,s,c,u,p,v){for(var h=o-1,m=o===0?i:[""],b=H6(m),y=0,w=0,C=0;y0?m[S]+" "+E:kp(E,/&\f/g,m[S])))&&(c[C++]=k);return Vy(e,t,n,o===0?tI:s,c,u,p,v)}function Y6(e,t,n,r){return Vy(e,t,n,eI,_y(F6()),Wl(e,2,-2),0,r)}function $C(e,t,n,r,o){return Vy(e,t,n,nI,Wl(e,0,r),Wl(e,r+1,-1),r,o)}function z0(e,t){for(var n="",r=0;r1&&arguments[1]!==void 0?arguments[1]:{},r=arguments.length>2&&arguments[2]!==void 0?arguments[2]:{root:!0,parentSelectors:[]},o=r.root,i=r.injectHash,a=r.parentSelectors,s=n.hashId,c=n.layer;n.path;var u=n.hashPriority,p=n.transformers,v=p===void 0?[]:p;n.linters;var h="",m={};function b(C){var S=C.getName(s);if(!m[S]){var 
E=e(C.style,n,{root:!1,parentSelectors:a}),k=ve(E,1),O=k[0];m[S]="@keyframes ".concat(C.getName(s)).concat(O)}}function y(C){var S=arguments.length>1&&arguments[1]!==void 0?arguments[1]:[];return C.forEach(function(E){Array.isArray(E)?y(E,S):E&&S.push(E)}),S}var w=y(Array.isArray(t)?t:[t]);return w.forEach(function(C){var S=typeof C=="string"&&!o?{}:C;if(typeof S=="string")h+="".concat(S,` -`);else if(S._keyframe)b(S);else{var E=v.reduce(function(k,O){var $;return(O==null||($=O.visit)===null||$===void 0?void 0:$.call(O,k))||k},S);Object.keys(E).forEach(function(k){var O=E[k];if(st(O)==="object"&&O&&(k!=="animationName"||!O._keyframe)&&!nB(O)){var $=!1,T=k.trim(),M=!1;(o||i)&&s?T.startsWith("@")?$=!0:T==="&"?T=TC("",s,u):T=TC(k,s,u):o&&!s&&(T==="&"||T==="")&&(T="",M=!0);var P=e(O,n,{root:M,injectHash:$,parentSelectors:[].concat(Se(a),[T])}),R=ve(P,2),A=R[0],V=R[1];m=Z(Z({},m),V),h+="".concat(T).concat(A)}else{let _=function(H,j){var L=H.replace(/[A-Z]/g,function(U){return"-".concat(U.toLowerCase())}),F=j;!j6[H]&&typeof F=="number"&&F!==0&&(F="".concat(F,"px")),H==="animationName"&&j!==null&&j!==void 0&&j._keyframe&&(b(j),F=j.getName(s)),h+="".concat(L,":").concat(F,";")};var z,B=(z=O==null?void 0:O.value)!==null&&z!==void 0?z:O;st(O)==="object"&&O!==null&&O!==void 0&&O[lI]&&Array.isArray(B)?B.forEach(function(H){_(k,H)}):_(k,B)}})}}),o?c&&(h="@layer ".concat(c.name," {").concat(h,"}"),c.dependencies&&(m["@layer ".concat(c.name)]=c.dependencies.map(function(C){return"@layer ".concat(C,", ").concat(c.name,";")}).join(` -`))):h="{".concat(h,"}"),[h,m]};function cI(e,t){return Uu("".concat(e.join("%")).concat(t))}function oB(){return null}var uI="style";function H0(e,t){var n=e.token,r=e.path,o=e.hashId,i=e.layer,a=e.nonce,s=e.clientOnly,c=e.order,u=c===void 0?0:c,p=d.useContext(Iv),v=p.autoClear;p.mock;var 
h=p.defaultCache,m=p.hashPriority,b=p.container,y=p.ssrInline,w=p.transformers,C=p.linters,S=p.cache,E=p.layer,k=n._tokenKey,O=[k];E&&O.push("layer"),O.push.apply(O,Se(r));var $=B0,T=Fy(uI,O,function(){var V=O.join("|");if(J6(V)){var z=eB(V),B=ve(z,2),_=B[0],H=B[1];if(_)return[_,k,H,{},s,u]}var j=t(),L=rB(j,{hashId:o,hashPriority:m,layer:E?i:void 0,path:r.join("-"),transformers:w,linters:C}),F=ve(L,2),U=F[0],D=F[1],W=Ip(U),G=cI(O,W);return[W,k,G,D,s,u]},function(V,z){var B=ve(V,3),_=B[2];(z||v)&&B0&&Ku(_,{mark:ui})},function(V){var z=ve(V,4),B=z[0];z[1];var _=z[2],H=z[3];if($&&B!==aI){var j={mark:ui,prepend:E?!1:"queue",attachTo:b,priority:u},L=typeof a=="function"?a():a;L&&(j.csp={nonce:L});var F=[],U=[];Object.keys(H).forEach(function(W){W.startsWith("@layer")?F.push(W):U.push(W)}),F.forEach(function(W){ea(Ip(H[W]),"_layer-".concat(W),Z(Z({},j),{},{prepend:!0}))});var D=ea(B,_,j);D[Aa]=S.instanceId,D.setAttribute(Vl,k),U.forEach(function(W){ea(Ip(H[W]),"_effect-".concat(W),j)})}}),M=ve(T,3),P=M[0],R=M[1],A=M[2];return function(V){var z;return!y||$||!h?z=d.createElement(oB,null):z=d.createElement("style",$e({},K(K({},Vl,R),ui,A),{dangerouslySetInnerHTML:{__html:P}})),d.createElement(d.Fragment,null,z,V)}}var iB=function(t,n,r){var o=ve(t,6),i=o[0],a=o[1],s=o[2],c=o[3],u=o[4],p=o[5],v=r||{},h=v.plain;if(u)return null;var m=i,b={"data-rc-order":"prependQueue","data-rc-priority":"".concat(p)};return m=Up(i,a,s,b,h),c&&Object.keys(c).forEach(function(y){if(!n[y]){n[y]=!0;var w=Ip(c[y]),C=Up(w,a,"_effect-".concat(y),b,h);y.startsWith("@layer")?m=C+m:m+=C}}),[p,s,m]},dI="cssVar",aB=function(t,n){var r=t.key,o=t.prefix,i=t.unitless,a=t.ignore,s=t.token,c=t.scope,u=c===void 0?"":c,p=d.useContext(Iv),v=p.cache.instanceId,h=p.container,m=s._tokenKey,b=[].concat(Se(t.path),[r,u,m]),y=Fy(dI,b,function(){var w=n(),C=Q2(w,r,{prefix:o,unitless:i,ignore:a,scope:u}),S=ve(C,2),E=S[0],k=S[1],O=cI(b,k);return[E,k,O,r]},function(w){var 
C=ve(w,3),S=C[2];B0&&Ku(S,{mark:ui})},function(w){var C=ve(w,3),S=C[1],E=C[2];if(S){var k=ea(S,E,{mark:ui,prepend:"queue",attachTo:h,priority:-999});k[Aa]=v,k.setAttribute(Vl,r)}});return y},sB=function(t,n,r){var o=ve(t,4),i=o[1],a=o[2],s=o[3],c=r||{},u=c.plain;if(!i)return null;var p=-999,v={"data-rc-order":"prependQueue","data-rc-priority":"".concat(p)},h=Up(i,s,a,v,u);return[p,a,h]};K(K(K({},uI,iB),J2,D6),dI,sB);var fn=function(){function e(t,n){Kn(this,e),K(this,"name",void 0),K(this,"style",void 0),K(this,"_keyframe",!0),this.name=t,this.style=n}return qn(e,[{key:"getName",value:function(){var n=arguments.length>0&&arguments[0]!==void 0?arguments[0]:"";return n?"".concat(n,"-").concat(this.name):this.name}}]),e}();function ll(e){return e.notSplit=!0,e}ll(["borderTop","borderBottom"]),ll(["borderTop"]),ll(["borderBottom"]),ll(["borderLeft","borderRight"]),ll(["borderLeft"]),ll(["borderRight"]);var Wy=d.createContext({});function fI(e){return U2(e)||F2(e)||By(e)||K2()}function bo(e,t){for(var n=e,r=0;r3&&arguments[3]!==void 0?arguments[3]:!1;return t.length&&r&&n===void 0&&!bo(e,t.slice(0,-1))?e:pI(e,t,n,r)}function lB(e){return st(e)==="object"&&e!==null&&Object.getPrototypeOf(e)===Object.prototype}function PC(e){return Array.isArray(e)?[]:{}}var cB=typeof Reflect>"u"?Object.keys:Reflect.ownKeys;function Cl(){for(var e=arguments.length,t=new Array(e),n=0;n{const e=()=>{};return e.deprecated=uB,e},vI=d.createContext(void 0);var hI={items_per_page:"/ page",jump_to:"Go to",jump_to_confirm:"confirm",page:"Page",prev_page:"Previous Page",next_page:"Next Page",prev_5:"Previous 5 Pages",next_5:"Next 5 Pages",prev_3:"Previous 3 Pages",next_3:"Next 3 Pages",page_size:"Page Size"},fB={yearFormat:"YYYY",dayFormat:"D",cellMeridiemFormat:"A",monthBeforeYear:!0},pB=Z(Z({},fB),{},{locale:"en_US",today:"Today",now:"Now",backToToday:"Back to today",ok:"OK",clear:"Clear",month:"Month",year:"Year",timeSelect:"select time",dateSelect:"select date",weekSelect:"Choose a 
week",monthSelect:"Choose a month",yearSelect:"Choose a year",decadeSelect:"Choose a decade",dateFormat:"M/D/YYYY",dateTimeFormat:"M/D/YYYY HH:mm:ss",previousMonth:"Previous month (PageUp)",nextMonth:"Next month (PageDown)",previousYear:"Last year (Control + left)",nextYear:"Next year (Control + right)",previousDecade:"Last decade",nextDecade:"Next decade",previousCentury:"Last century",nextCentury:"Next century"});const gI={placeholder:"Select time",rangePlaceholder:["Start time","End time"]},MC={lang:Object.assign({placeholder:"Select date",yearPlaceholder:"Select year",quarterPlaceholder:"Select quarter",monthPlaceholder:"Select month",weekPlaceholder:"Select week",rangePlaceholder:["Start date","End date"],rangeYearPlaceholder:["Start year","End year"],rangeQuarterPlaceholder:["Start quarter","End quarter"],rangeMonthPlaceholder:["Start month","End month"],rangeWeekPlaceholder:["Start week","End week"]},pB),timePickerLocale:Object.assign({},gI)},vo="${label} is not a valid ${type}",hi={locale:"en",Pagination:hI,DatePicker:MC,TimePicker:gI,Calendar:MC,global:{placeholder:"Please select"},Table:{filterTitle:"Filter menu",filterConfirm:"OK",filterReset:"Reset",filterEmptyText:"No filters",filterCheckall:"Select all items",filterSearchPlaceholder:"Search in filters",emptyText:"No data",selectAll:"Select current page",selectInvert:"Invert current page",selectNone:"Clear all data",selectionAll:"Select all data",sortTitle:"Sort",expand:"Expand row",collapse:"Collapse row",triggerDesc:"Click to sort descending",triggerAsc:"Click to sort ascending",cancelSort:"Click to cancel sorting"},Tour:{Next:"Next",Previous:"Previous",Finish:"Finish"},Modal:{okText:"OK",cancelText:"Cancel",justOkText:"OK"},Popconfirm:{okText:"OK",cancelText:"Cancel"},Transfer:{titles:["",""],searchPlaceholder:"Search here",itemUnit:"item",itemsUnit:"items",remove:"Remove",selectCurrent:"Select current page",removeCurrent:"Remove current page",selectAll:"Select all data",deselectAll:"Deselect all 
data",removeAll:"Remove all data",selectInvert:"Invert current page"},Upload:{uploading:"Uploading...",removeFile:"Remove file",uploadError:"Upload error",previewFile:"Preview file",downloadFile:"Download file"},Empty:{description:"No data"},Icon:{icon:"icon"},Text:{edit:"Edit",copy:"Copy",copied:"Copied",expand:"Expand",collapse:"Collapse"},Form:{optional:"(optional)",defaultValidateMessages:{default:"Field validation error for ${label}",required:"Please enter ${label}",enum:"${label} must be one of [${enum}]",whitespace:"${label} cannot be a blank character",date:{format:"${label} date format is invalid",parse:"${label} cannot be converted to a date",invalid:"${label} is an invalid date"},types:{string:vo,method:vo,array:vo,object:vo,number:vo,date:vo,boolean:vo,integer:vo,float:vo,regexp:vo,email:vo,url:vo,hex:vo},string:{len:"${label} must be ${len} characters",min:"${label} must be at least ${min} characters",max:"${label} must be up to ${max} characters",range:"${label} must be between ${min}-${max} characters"},number:{len:"${label} must be equal to ${len}",min:"${label} must be minimum ${min}",max:"${label} must be maximum ${max}",range:"${label} must be between ${min}-${max}"},array:{len:"Must be ${len} ${label}",min:"At least ${min} ${label}",max:"At most ${max} ${label}",range:"The amount of ${label} must be between ${min}-${max}"},pattern:{mismatch:"${label} does not match the pattern ${pattern}"}}},Image:{preview:"Preview"},QRCode:{expired:"QR code expired",refresh:"Refresh",scanned:"Scanned"},ColorPicker:{presetEmpty:"Empty",transparent:"Transparent",singleColor:"Single",gradientColor:"Gradient"}};let Tp=Object.assign({},hi.Modal),Pp=[];const NC=()=>Pp.reduce((e,t)=>Object.assign(Object.assign({},e),t),hi.Modal);function vB(e){if(e){const t=Object.assign({},e);return Pp.push(t),Tp=NC(),()=>{Pp=Pp.filter(n=>n!==t),Tp=NC()}}Tp=Object.assign({},hi.Modal)}function mI(){return Tp}const Uy=d.createContext(void 0),bi=(e,t)=>{const 
n=d.useContext(Uy),r=d.useMemo(()=>{var i;const a=t||hi[e],s=(i=n==null?void 0:n[e])!==null&&i!==void 0?i:{};return Object.assign(Object.assign({},typeof a=="function"?a():a),s||{})},[e,t,n]),o=d.useMemo(()=>{const i=n==null?void 0:n.locale;return n!=null&&n.exist&&!i?hi.locale:i},[n]);return[r,o]},hB="internalMark",gB=e=>{const{locale:t={},children:n,_ANT_MARK__:r}=e;d.useEffect(()=>vB(t==null?void 0:t.Modal),[t]);const o=d.useMemo(()=>Object.assign(Object.assign({},t),{exist:!0}),[t]);return d.createElement(Uy.Provider,{value:o},n)};function Rr(e,t){mB(e)&&(e="100%");var n=bB(e);return e=t===360?e:Math.min(t,Math.max(0,parseFloat(e))),n&&(e=parseInt(String(e*t),10)/100),Math.abs(e-t)<1e-6?1:(t===360?e=(e<0?e%t+t:e%t)/parseFloat(String(t)):e=e%t/parseFloat(String(t)),e)}function Ff(e){return Math.min(1,Math.max(0,e))}function mB(e){return typeof e=="string"&&e.indexOf(".")!==-1&&parseFloat(e)===1}function bB(e){return typeof e=="string"&&e.indexOf("%")!==-1}function bI(e){return e=parseFloat(e),(isNaN(e)||e<0||e>1)&&(e=1),e}function _f(e){return e<=1?"".concat(Number(e)*100,"%"):e}function bs(e){return e.length===1?"0"+e:String(e)}function yB(e,t,n){return{r:Rr(e,255)*255,g:Rr(t,255)*255,b:Rr(n,255)*255}}function RC(e,t,n){e=Rr(e,255),t=Rr(t,255),n=Rr(n,255);var r=Math.max(e,t,n),o=Math.min(e,t,n),i=0,a=0,s=(r+o)/2;if(r===o)a=0,i=0;else{var c=r-o;switch(a=s>.5?c/(2-r-o):c/(r+o),r){case e:i=(t-n)/c+(t1&&(n-=1),n<1/6?e+(t-e)*(6*n):n<1/2?t:n<2/3?e+(t-e)*(2/3-n)*6:e}function wB(e,t,n){var r,o,i;if(e=Rr(e,360),t=Rr(t,100),n=Rr(n,100),t===0)o=n,i=n,r=n;else{var a=n<.5?n*(1+t):n+t-n*t,s=2*n-a;r=rm(s,a,e+1/3),o=rm(s,a,e),i=rm(s,a,e-1/3)}return{r:r*255,g:o*255,b:i*255}}function F0(e,t,n){e=Rr(e,255),t=Rr(t,255),n=Rr(n,255);var r=Math.max(e,t,n),o=Math.min(e,t,n),i=0,a=r,s=r-o,c=r===0?0:s/r;if(r===o)i=0;else{switch(r){case e:i=(t-n)/s+(t>16,g:(e&65280)>>8,b:e&255}}var 
V0={aliceblue:"#f0f8ff",antiquewhite:"#faebd7",aqua:"#00ffff",aquamarine:"#7fffd4",azure:"#f0ffff",beige:"#f5f5dc",bisque:"#ffe4c4",black:"#000000",blanchedalmond:"#ffebcd",blue:"#0000ff",blueviolet:"#8a2be2",brown:"#a52a2a",burlywood:"#deb887",cadetblue:"#5f9ea0",chartreuse:"#7fff00",chocolate:"#d2691e",coral:"#ff7f50",cornflowerblue:"#6495ed",cornsilk:"#fff8dc",crimson:"#dc143c",cyan:"#00ffff",darkblue:"#00008b",darkcyan:"#008b8b",darkgoldenrod:"#b8860b",darkgray:"#a9a9a9",darkgreen:"#006400",darkgrey:"#a9a9a9",darkkhaki:"#bdb76b",darkmagenta:"#8b008b",darkolivegreen:"#556b2f",darkorange:"#ff8c00",darkorchid:"#9932cc",darkred:"#8b0000",darksalmon:"#e9967a",darkseagreen:"#8fbc8f",darkslateblue:"#483d8b",darkslategray:"#2f4f4f",darkslategrey:"#2f4f4f",darkturquoise:"#00ced1",darkviolet:"#9400d3",deeppink:"#ff1493",deepskyblue:"#00bfff",dimgray:"#696969",dimgrey:"#696969",dodgerblue:"#1e90ff",firebrick:"#b22222",floralwhite:"#fffaf0",forestgreen:"#228b22",fuchsia:"#ff00ff",gainsboro:"#dcdcdc",ghostwhite:"#f8f8ff",goldenrod:"#daa520",gold:"#ffd700",gray:"#808080",green:"#008000",greenyellow:"#adff2f",grey:"#808080",honeydew:"#f0fff0",hotpink:"#ff69b4",indianred:"#cd5c5c",indigo:"#4b0082",ivory:"#fffff0",khaki:"#f0e68c",lavenderblush:"#fff0f5",lavender:"#e6e6fa",lawngreen:"#7cfc00",lemonchiffon:"#fffacd",lightblue:"#add8e6",lightcoral:"#f08080",lightcyan:"#e0ffff",lightgoldenrodyellow:"#fafad2",lightgray:"#d3d3d3",lightgreen:"#90ee90",lightgrey:"#d3d3d3",lightpink:"#ffb6c1",lightsalmon:"#ffa07a",lightseagreen:"#20b2aa",lightskyblue:"#87cefa",lightslategray:"#778899",lightslategrey:"#778899",lightsteelblue:"#b0c4de",lightyellow:"#ffffe0",lime:"#00ff00",limegreen:"#32cd32",linen:"#faf0e6",magenta:"#ff00ff",maroon:"#800000",mediumaquamarine:"#66cdaa",mediumblue:"#0000cd",mediumorchid:"#ba55d3",mediumpurple:"#9370db",mediumseagreen:"#3cb371",mediumslateblue:"#7b68ee",mediumspringgreen:"#00fa9a",mediumturquoise:"#48d1cc",mediumvioletred:"#c71585",midnightblue:"#191970",mint
cream:"#f5fffa",mistyrose:"#ffe4e1",moccasin:"#ffe4b5",navajowhite:"#ffdead",navy:"#000080",oldlace:"#fdf5e6",olive:"#808000",olivedrab:"#6b8e23",orange:"#ffa500",orangered:"#ff4500",orchid:"#da70d6",palegoldenrod:"#eee8aa",palegreen:"#98fb98",paleturquoise:"#afeeee",palevioletred:"#db7093",papayawhip:"#ffefd5",peachpuff:"#ffdab9",peru:"#cd853f",pink:"#ffc0cb",plum:"#dda0dd",powderblue:"#b0e0e6",purple:"#800080",rebeccapurple:"#663399",red:"#ff0000",rosybrown:"#bc8f8f",royalblue:"#4169e1",saddlebrown:"#8b4513",salmon:"#fa8072",sandybrown:"#f4a460",seagreen:"#2e8b57",seashell:"#fff5ee",sienna:"#a0522d",silver:"#c0c0c0",skyblue:"#87ceeb",slateblue:"#6a5acd",slategray:"#708090",slategrey:"#708090",snow:"#fffafa",springgreen:"#00ff7f",steelblue:"#4682b4",tan:"#d2b48c",teal:"#008080",thistle:"#d8bfd8",tomato:"#ff6347",turquoise:"#40e0d0",violet:"#ee82ee",wheat:"#f5deb3",white:"#ffffff",whitesmoke:"#f5f5f5",yellow:"#ffff00",yellowgreen:"#9acd32"};function Sl(e){var t={r:0,g:0,b:0},n=1,r=null,o=null,i=null,a=!1,s=!1;return typeof e=="string"&&(e=$B(e)),typeof e=="object"&&(Gi(e.r)&&Gi(e.g)&&Gi(e.b)?(t=yB(e.r,e.g,e.b),a=!0,s=String(e.r).substr(-1)==="%"?"prgb":"rgb"):Gi(e.h)&&Gi(e.s)&&Gi(e.v)?(r=_f(e.s),o=_f(e.v),t=xB(e.h,r,o),a=!0,s="hsv"):Gi(e.h)&&Gi(e.s)&&Gi(e.l)&&(r=_f(e.s),i=_f(e.l),t=wB(e.h,r,i),a=!0,s="hsl"),Object.prototype.hasOwnProperty.call(e,"a")&&(n=e.a)),n=bI(n),{ok:a,format:e.format||s,r:Math.min(255,Math.max(t.r,0)),g:Math.min(255,Math.max(t.g,0)),b:Math.min(255,Math.max(t.b,0)),a:n}}var kB="[-\\+]?\\d+%?",OB="[-\\+]?\\d*\\.\\d+%?",Ha="(?:".concat(OB,")|(?:").concat(kB,")"),om="[\\s|\\(]+(".concat(Ha,")[,|\\s]+(").concat(Ha,")[,|\\s]+(").concat(Ha,")\\s*\\)?"),im="[\\s|\\(]+(".concat(Ha,")[,|\\s]+(").concat(Ha,")[,|\\s]+(").concat(Ha,")[,|\\s]+(").concat(Ha,")\\s*\\)?"),ii={CSS_UNIT:new RegExp(Ha),rgb:new RegExp("rgb"+om),rgba:new RegExp("rgba"+im),hsl:new RegExp("hsl"+om),hsla:new RegExp("hsla"+im),hsv:new RegExp("hsv"+om),hsva:new 
RegExp("hsva"+im),hex3:/^#?([0-9a-fA-F]{1})([0-9a-fA-F]{1})([0-9a-fA-F]{1})$/,hex6:/^#?([0-9a-fA-F]{2})([0-9a-fA-F]{2})([0-9a-fA-F]{2})$/,hex4:/^#?([0-9a-fA-F]{1})([0-9a-fA-F]{1})([0-9a-fA-F]{1})([0-9a-fA-F]{1})$/,hex8:/^#?([0-9a-fA-F]{2})([0-9a-fA-F]{2})([0-9a-fA-F]{2})([0-9a-fA-F]{2})$/};function $B(e){if(e=e.trim().toLowerCase(),e.length===0)return!1;var t=!1;if(V0[e])e=V0[e],t=!0;else if(e==="transparent")return{r:0,g:0,b:0,a:0,format:"name"};var n=ii.rgb.exec(e);return n?{r:n[1],g:n[2],b:n[3]}:(n=ii.rgba.exec(e),n?{r:n[1],g:n[2],b:n[3],a:n[4]}:(n=ii.hsl.exec(e),n?{h:n[1],s:n[2],l:n[3]}:(n=ii.hsla.exec(e),n?{h:n[1],s:n[2],l:n[3],a:n[4]}:(n=ii.hsv.exec(e),n?{h:n[1],s:n[2],v:n[3]}:(n=ii.hsva.exec(e),n?{h:n[1],s:n[2],v:n[3],a:n[4]}:(n=ii.hex8.exec(e),n?{r:mo(n[1]),g:mo(n[2]),b:mo(n[3]),a:DC(n[4]),format:t?"name":"hex8"}:(n=ii.hex6.exec(e),n?{r:mo(n[1]),g:mo(n[2]),b:mo(n[3]),format:t?"name":"hex"}:(n=ii.hex4.exec(e),n?{r:mo(n[1]+n[1]),g:mo(n[2]+n[2]),b:mo(n[3]+n[3]),a:DC(n[4]+n[4]),format:t?"name":"hex8"}:(n=ii.hex3.exec(e),n?{r:mo(n[1]+n[1]),g:mo(n[2]+n[2]),b:mo(n[3]+n[3]),format:t?"name":"hex"}:!1)))))))))}function Gi(e){return!!ii.CSS_UNIT.exec(String(e))}var xn=function(){function e(t,n){t===void 0&&(t=""),n===void 0&&(n={});var r;if(t instanceof e)return t;typeof t=="number"&&(t=EB(t)),this.originalInput=t;var o=Sl(t);this.originalInput=t,this.r=o.r,this.g=o.g,this.b=o.b,this.a=o.a,this.roundA=Math.round(100*this.a)/100,this.format=(r=n.format)!==null&&r!==void 0?r:o.format,this.gradientType=n.gradientType,this.r<1&&(this.r=Math.round(this.r)),this.g<1&&(this.g=Math.round(this.g)),this.b<1&&(this.b=Math.round(this.b)),this.isValid=o.ok}return e.prototype.isDark=function(){return this.getBrightness()<128},e.prototype.isLight=function(){return!this.isDark()},e.prototype.getBrightness=function(){var t=this.toRgb();return(t.r*299+t.g*587+t.b*114)/1e3},e.prototype.getLuminance=function(){var t=this.toRgb(),n,r,o,i=t.r/255,a=t.g/255,s=t.b/255;return 
i<=.03928?n=i/12.92:n=Math.pow((i+.055)/1.055,2.4),a<=.03928?r=a/12.92:r=Math.pow((a+.055)/1.055,2.4),s<=.03928?o=s/12.92:o=Math.pow((s+.055)/1.055,2.4),.2126*n+.7152*r+.0722*o},e.prototype.getAlpha=function(){return this.a},e.prototype.setAlpha=function(t){return this.a=bI(t),this.roundA=Math.round(100*this.a)/100,this},e.prototype.isMonochrome=function(){var t=this.toHsl().s;return t===0},e.prototype.toHsv=function(){var t=F0(this.r,this.g,this.b);return{h:t.h*360,s:t.s,v:t.v,a:this.a}},e.prototype.toHsvString=function(){var t=F0(this.r,this.g,this.b),n=Math.round(t.h*360),r=Math.round(t.s*100),o=Math.round(t.v*100);return this.a===1?"hsv(".concat(n,", ").concat(r,"%, ").concat(o,"%)"):"hsva(".concat(n,", ").concat(r,"%, ").concat(o,"%, ").concat(this.roundA,")")},e.prototype.toHsl=function(){var t=RC(this.r,this.g,this.b);return{h:t.h*360,s:t.s,l:t.l,a:this.a}},e.prototype.toHslString=function(){var t=RC(this.r,this.g,this.b),n=Math.round(t.h*360),r=Math.round(t.s*100),o=Math.round(t.l*100);return this.a===1?"hsl(".concat(n,", ").concat(r,"%, ").concat(o,"%)"):"hsla(".concat(n,", ").concat(r,"%, ").concat(o,"%, ").concat(this.roundA,")")},e.prototype.toHex=function(t){return t===void 0&&(t=!1),_0(this.r,this.g,this.b,t)},e.prototype.toHexString=function(t){return t===void 0&&(t=!1),"#"+this.toHex(t)},e.prototype.toHex8=function(t){return t===void 0&&(t=!1),SB(this.r,this.g,this.b,this.a,t)},e.prototype.toHex8String=function(t){return t===void 0&&(t=!1),"#"+this.toHex8(t)},e.prototype.toHexShortString=function(t){return t===void 0&&(t=!1),this.a===1?this.toHexString(t):this.toHex8String(t)},e.prototype.toRgb=function(){return{r:Math.round(this.r),g:Math.round(this.g),b:Math.round(this.b),a:this.a}},e.prototype.toRgbString=function(){var t=Math.round(this.r),n=Math.round(this.g),r=Math.round(this.b);return this.a===1?"rgb(".concat(t,", ").concat(n,", ").concat(r,")"):"rgba(".concat(t,", ").concat(n,", ").concat(r,", 
").concat(this.roundA,")")},e.prototype.toPercentageRgb=function(){var t=function(n){return"".concat(Math.round(Rr(n,255)*100),"%")};return{r:t(this.r),g:t(this.g),b:t(this.b),a:this.a}},e.prototype.toPercentageRgbString=function(){var t=function(n){return Math.round(Rr(n,255)*100)};return this.a===1?"rgb(".concat(t(this.r),"%, ").concat(t(this.g),"%, ").concat(t(this.b),"%)"):"rgba(".concat(t(this.r),"%, ").concat(t(this.g),"%, ").concat(t(this.b),"%, ").concat(this.roundA,")")},e.prototype.toName=function(){if(this.a===0)return"transparent";if(this.a<1)return!1;for(var t="#"+_0(this.r,this.g,this.b,!1),n=0,r=Object.entries(V0);n=0,i=!n&&o&&(t.startsWith("hex")||t==="name");return i?t==="name"&&this.a===0?this.toName():this.toRgbString():(t==="rgb"&&(r=this.toRgbString()),t==="prgb"&&(r=this.toPercentageRgbString()),(t==="hex"||t==="hex6")&&(r=this.toHexString()),t==="hex3"&&(r=this.toHexString(!0)),t==="hex4"&&(r=this.toHex8String(!0)),t==="hex8"&&(r=this.toHex8String()),t==="name"&&(r=this.toName()),t==="hsl"&&(r=this.toHslString()),t==="hsv"&&(r=this.toHsvString()),r||this.toHexString())},e.prototype.toNumber=function(){return(Math.round(this.r)<<16)+(Math.round(this.g)<<8)+Math.round(this.b)},e.prototype.clone=function(){return new e(this.toString())},e.prototype.lighten=function(t){t===void 0&&(t=10);var n=this.toHsl();return n.l+=t/100,n.l=Ff(n.l),new e(n)},e.prototype.brighten=function(t){t===void 0&&(t=10);var n=this.toRgb();return n.r=Math.max(0,Math.min(255,n.r-Math.round(255*(-t/100)))),n.g=Math.max(0,Math.min(255,n.g-Math.round(255*(-t/100)))),n.b=Math.max(0,Math.min(255,n.b-Math.round(255*(-t/100)))),new e(n)},e.prototype.darken=function(t){t===void 0&&(t=10);var n=this.toHsl();return n.l-=t/100,n.l=Ff(n.l),new e(n)},e.prototype.tint=function(t){return t===void 0&&(t=10),this.mix("white",t)},e.prototype.shade=function(t){return t===void 0&&(t=10),this.mix("black",t)},e.prototype.desaturate=function(t){t===void 0&&(t=10);var n=this.toHsl();return 
n.s-=t/100,n.s=Ff(n.s),new e(n)},e.prototype.saturate=function(t){t===void 0&&(t=10);var n=this.toHsl();return n.s+=t/100,n.s=Ff(n.s),new e(n)},e.prototype.greyscale=function(){return this.desaturate(100)},e.prototype.spin=function(t){var n=this.toHsl(),r=(n.h+t)%360;return n.h=r<0?360+r:r,new e(n)},e.prototype.mix=function(t,n){n===void 0&&(n=50);var r=this.toRgb(),o=new e(t).toRgb(),i=n/100,a={r:(o.r-r.r)*i+r.r,g:(o.g-r.g)*i+r.g,b:(o.b-r.b)*i+r.b,a:(o.a-r.a)*i+r.a};return new e(a)},e.prototype.analogous=function(t,n){t===void 0&&(t=6),n===void 0&&(n=30);var r=this.toHsl(),o=360/n,i=[this];for(r.h=(r.h-(o*t>>1)+720)%360;--t;)r.h=(r.h+o)%360,i.push(new e(r));return i},e.prototype.complement=function(){var t=this.toHsl();return t.h=(t.h+180)%360,new e(t)},e.prototype.monochromatic=function(t){t===void 0&&(t=6);for(var n=this.toHsv(),r=n.h,o=n.s,i=n.v,a=[],s=1/t;t--;)a.push(new e({h:r,s:o,v:i})),i=(i+s)%1;return a},e.prototype.splitcomplement=function(){var t=this.toHsl(),n=t.h;return[this,new e({h:(n+72)%360,s:t.s,l:t.l}),new e({h:(n+216)%360,s:t.s,l:t.l})]},e.prototype.onBackground=function(t){var n=this.toRgb(),r=new e(t).toRgb(),o=n.a+r.a*(1-n.a);return new e({r:(n.r*n.a+r.r*r.a*(1-n.a))/o,g:(n.g*n.a+r.g*r.a*(1-n.a))/o,b:(n.b*n.a+r.b*r.a*(1-n.a))/o,a:o})},e.prototype.triad=function(){return this.polyad(3)},e.prototype.tetrad=function(){return this.polyad(4)},e.prototype.polyad=function(t){for(var n=this.toHsl(),r=n.h,o=[this],i=360/t,a=1;a=60&&Math.round(e.h)<=240?r=n?Math.round(e.h)-Vf*t:Math.round(e.h)+Vf*t:r=n?Math.round(e.h)+Vf*t:Math.round(e.h)-Vf*t,r<0?r+=360:r>=360&&(r-=360),r}function AC(e,t,n){if(e.h===0&&e.s===0)return e.s;var r;return n?r=e.s-jC*t:t===wI?r=e.s+jC:r=e.s+IB*t,r>1&&(r=1),n&&t===yI&&r>.1&&(r=.1),r<.06&&(r=.06),Number(r.toFixed(2))}function zC(e,t,n){var r;return n?r=e.v+TB*t:r=e.v-PB*t,r>1&&(r=1),Number(r.toFixed(2))}function $s(e){for(var t=arguments.length>1&&arguments[1]!==void 0?arguments[1]:{},n=[],r=Sl(e),o=yI;o>0;o-=1){var 
i=LC(r),a=Wf(Sl({h:BC(i,o,!0),s:AC(i,o,!0),v:zC(i,o,!0)}));n.push(a)}n.push(Wf(r));for(var s=1;s<=wI;s+=1){var c=LC(r),u=Wf(Sl({h:BC(c,s),s:AC(c,s),v:zC(c,s)}));n.push(u)}return t.theme==="dark"?MB.map(function(p){var v=p.index,h=p.opacity,m=Wf(NB(Sl(t.backgroundColor||"#141414"),Sl(n[v]),h*100));return m}):n}var Tl={red:"#F5222D",volcano:"#FA541C",orange:"#FA8C16",gold:"#FAAD14",yellow:"#FADB14",lime:"#A0D911",green:"#52C41A",cyan:"#13C2C2",blue:"#1677FF",geekblue:"#2F54EB",purple:"#722ED1",magenta:"#EB2F96",grey:"#666666"},W0=["#fff1f0","#ffccc7","#ffa39e","#ff7875","#ff4d4f","#f5222d","#cf1322","#a8071a","#820014","#5c0011"];W0.primary=W0[5];var U0=["#fff2e8","#ffd8bf","#ffbb96","#ff9c6e","#ff7a45","#fa541c","#d4380d","#ad2102","#871400","#610b00"];U0.primary=U0[5];var K0=["#fff7e6","#ffe7ba","#ffd591","#ffc069","#ffa940","#fa8c16","#d46b08","#ad4e00","#873800","#612500"];K0.primary=K0[5];var Kp=["#fffbe6","#fff1b8","#ffe58f","#ffd666","#ffc53d","#faad14","#d48806","#ad6800","#874d00","#613400"];Kp.primary=Kp[5];var q0=["#feffe6","#ffffb8","#fffb8f","#fff566","#ffec3d","#fadb14","#d4b106","#ad8b00","#876800","#614700"];q0.primary=q0[5];var X0=["#fcffe6","#f4ffb8","#eaff8f","#d3f261","#bae637","#a0d911","#7cb305","#5b8c00","#3f6600","#254000"];X0.primary=X0[5];var G0=["#f6ffed","#d9f7be","#b7eb8f","#95de64","#73d13d","#52c41a","#389e0d","#237804","#135200","#092b00"];G0.primary=G0[5];var Y0=["#e6fffb","#b5f5ec","#87e8de","#5cdbd3","#36cfc9","#13c2c2","#08979c","#006d75","#00474f","#002329"];Y0.primary=Y0[5];var Kl=["#e6f4ff","#bae0ff","#91caff","#69b1ff","#4096ff","#1677ff","#0958d9","#003eb3","#002c8c","#001d66"];Kl.primary=Kl[5];var Q0=["#f0f5ff","#d6e4ff","#adc6ff","#85a5ff","#597ef7","#2f54eb","#1d39c4","#10239e","#061178","#030852"];Q0.primary=Q0[5];var Z0=["#f9f0ff","#efdbff","#d3adf7","#b37feb","#9254de","#722ed1","#531dab","#391085","#22075e","#120338"];Z0.primary=Z0[5];var 
J0=["#fff0f6","#ffd6e7","#ffadd2","#ff85c0","#f759ab","#eb2f96","#c41d7f","#9e1068","#780650","#520339"];J0.primary=J0[5];var eb=["#a6a6a6","#999999","#8c8c8c","#808080","#737373","#666666","#404040","#1a1a1a","#000000","#000000"];eb.primary=eb[5];var am={red:W0,volcano:U0,orange:K0,gold:Kp,yellow:q0,lime:X0,green:G0,cyan:Y0,blue:Kl,geekblue:Q0,purple:Z0,magenta:J0,grey:eb};const Ky={blue:"#1677FF",purple:"#722ED1",cyan:"#13C2C2",green:"#52C41A",magenta:"#EB2F96",pink:"#EB2F96",red:"#F5222D",orange:"#FA8C16",yellow:"#FADB14",volcano:"#FA541C",geekblue:"#2F54EB",gold:"#FAAD14",lime:"#A0D911"},ql=Object.assign(Object.assign({},Ky),{colorPrimary:"#1677ff",colorSuccess:"#52c41a",colorWarning:"#faad14",colorError:"#ff4d4f",colorInfo:"#1677ff",colorLink:"",colorTextBase:"",colorBgBase:"",fontFamily:`-apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, 'Helvetica Neue', Arial, -'Noto Sans', sans-serif, 'Apple Color Emoji', 'Segoe UI Emoji', 'Segoe UI Symbol', -'Noto Color Emoji'`,fontFamilyCode:"'SFMono-Regular', Consolas, 'Liberation Mono', Menlo, Courier, monospace",fontSize:14,lineWidth:1,lineType:"solid",motionUnit:.1,motionBase:0,motionEaseOutCirc:"cubic-bezier(0.08, 0.82, 0.17, 1)",motionEaseInOutCirc:"cubic-bezier(0.78, 0.14, 0.15, 0.86)",motionEaseOut:"cubic-bezier(0.215, 0.61, 0.355, 1)",motionEaseInOut:"cubic-bezier(0.645, 0.045, 0.355, 1)",motionEaseOutBack:"cubic-bezier(0.12, 0.4, 0.29, 1.46)",motionEaseInBack:"cubic-bezier(0.71, -0.46, 0.88, 0.6)",motionEaseInQuint:"cubic-bezier(0.755, 0.05, 0.855, 0.06)",motionEaseOutQuint:"cubic-bezier(0.23, 1, 0.32, 1)",borderRadius:6,sizeUnit:4,sizeStep:4,sizePopupArrow:16,controlHeight:32,zIndexBase:0,zIndexPopupBase:1e3,opacityImage:1,wireframe:!1,motion:!0});function 
xI(e,t){let{generateColorPalettes:n,generateNeutralColorPalettes:r}=t;const{colorSuccess:o,colorWarning:i,colorError:a,colorInfo:s,colorPrimary:c,colorBgBase:u,colorTextBase:p}=e,v=n(c),h=n(o),m=n(i),b=n(a),y=n(s),w=r(u,p),C=e.colorLink||e.colorInfo,S=n(C),E=new xn(b[1]).mix(new xn(b[3]),50).toHexString();return Object.assign(Object.assign({},w),{colorPrimaryBg:v[1],colorPrimaryBgHover:v[2],colorPrimaryBorder:v[3],colorPrimaryBorderHover:v[4],colorPrimaryHover:v[5],colorPrimary:v[6],colorPrimaryActive:v[7],colorPrimaryTextHover:v[8],colorPrimaryText:v[9],colorPrimaryTextActive:v[10],colorSuccessBg:h[1],colorSuccessBgHover:h[2],colorSuccessBorder:h[3],colorSuccessBorderHover:h[4],colorSuccessHover:h[4],colorSuccess:h[6],colorSuccessActive:h[7],colorSuccessTextHover:h[8],colorSuccessText:h[9],colorSuccessTextActive:h[10],colorErrorBg:b[1],colorErrorBgHover:b[2],colorErrorBgFilledHover:E,colorErrorBgActive:b[3],colorErrorBorder:b[3],colorErrorBorderHover:b[4],colorErrorHover:b[5],colorError:b[6],colorErrorActive:b[7],colorErrorTextHover:b[8],colorErrorText:b[9],colorErrorTextActive:b[10],colorWarningBg:m[1],colorWarningBgHover:m[2],colorWarningBorder:m[3],colorWarningBorderHover:m[4],colorWarningHover:m[4],colorWarning:m[6],colorWarningActive:m[7],colorWarningTextHover:m[8],colorWarningText:m[9],colorWarningTextActive:m[10],colorInfoBg:y[1],colorInfoBgHover:y[2],colorInfoBorder:y[3],colorInfoBorderHover:y[4],colorInfoHover:y[4],colorInfo:y[6],colorInfoActive:y[7],colorInfoTextHover:y[8],colorInfoText:y[9],colorInfoTextActive:y[10],colorLinkHover:S[4],colorLink:S[6],colorLinkActive:S[7],colorBgMask:new xn("#000").setAlpha(.45).toRgbString(),colorWhite:"#fff"})}const RB=e=>{let t=e,n=e,r=e,o=e;return e<6&&e>=5?t=e+1:e<16&&e>=6?t=e+2:e>=16&&(t=16),e<7&&e>=5?n=4:e<8&&e>=7?n=5:e<14&&e>=8?n=6:e<16&&e>=14?n=7:e>=16&&(n=8),e<6&&e>=2?r=1:e>=6&&(r=2),e>4&&e<8?o=4:e>=8&&(o=6),{borderRadius:e,borderRadiusXS:r,borderRadiusSM:n,borderRadiusLG:t,borderRadiusOuter:o}};function 
DB(e){const{motionUnit:t,motionBase:n,borderRadius:r,lineWidth:o}=e;return Object.assign({motionDurationFast:`${(n+t).toFixed(1)}s`,motionDurationMid:`${(n+t*2).toFixed(1)}s`,motionDurationSlow:`${(n+t*3).toFixed(1)}s`,lineWidthBold:o+1},RB(r))}const SI=e=>{const{controlHeight:t}=e;return{controlHeightSM:t*.75,controlHeightXS:t*.5,controlHeightLG:t*1.25}};function Mp(e){return(e+8)/e}function jB(e){const t=new Array(10).fill(null).map((n,r)=>{const o=r-1,i=e*Math.pow(Math.E,o/5),a=r>1?Math.floor(i):Math.ceil(i);return Math.floor(a/2)*2});return t[1]=e,t.map(n=>({size:n,lineHeight:Mp(n)}))}const CI=e=>{const t=jB(e),n=t.map(p=>p.size),r=t.map(p=>p.lineHeight),o=n[1],i=n[0],a=n[2],s=r[1],c=r[0],u=r[2];return{fontSizeSM:i,fontSize:o,fontSizeLG:a,fontSizeXL:n[3],fontSizeHeading1:n[6],fontSizeHeading2:n[5],fontSizeHeading3:n[4],fontSizeHeading4:n[3],fontSizeHeading5:n[2],lineHeight:s,lineHeightLG:u,lineHeightSM:c,fontHeight:Math.round(s*o),fontHeightLG:Math.round(u*a),fontHeightSM:Math.round(c*i),lineHeightHeading1:r[6],lineHeightHeading2:r[5],lineHeightHeading3:r[4],lineHeightHeading4:r[3],lineHeightHeading5:r[2]}};function LB(e){const{sizeUnit:t,sizeStep:n}=e;return{sizeXXL:t*(n+8),sizeXL:t*(n+4),sizeLG:t*(n+2),sizeMD:t*(n+1),sizeMS:t*n,size:t*n,sizeSM:t*(n-1),sizeXS:t*(n-2),sizeXXS:t*(n-3)}}const No=(e,t)=>new xn(e).setAlpha(t).toRgbString(),Qc=(e,t)=>new xn(e).darken(t).toHexString(),BB=e=>{const t=$s(e);return{1:t[0],2:t[1],3:t[2],4:t[3],5:t[4],6:t[5],7:t[6],8:t[4],9:t[5],10:t[6]}},AB=(e,t)=>{const 
n=e||"#fff",r=t||"#000";return{colorBgBase:n,colorTextBase:r,colorText:No(r,.88),colorTextSecondary:No(r,.65),colorTextTertiary:No(r,.45),colorTextQuaternary:No(r,.25),colorFill:No(r,.15),colorFillSecondary:No(r,.06),colorFillTertiary:No(r,.04),colorFillQuaternary:No(r,.02),colorBgSolid:No(r,1),colorBgSolidHover:No(r,.75),colorBgSolidActive:No(r,.95),colorBgLayout:Qc(n,4),colorBgContainer:Qc(n,0),colorBgElevated:Qc(n,0),colorBgSpotlight:No(r,.85),colorBgBlur:"transparent",colorBorder:Qc(n,15),colorBorderSecondary:Qc(n,6)}};function md(e){Tl.pink=Tl.magenta,am.pink=am.magenta;const t=Object.keys(Ky).map(n=>{const r=e[n]===Tl[n]?am[n]:$s(e[n]);return new Array(10).fill(1).reduce((o,i,a)=>(o[`${n}-${a+1}`]=r[a],o[`${n}${a+1}`]=r[a],o),{})}).reduce((n,r)=>(n=Object.assign(Object.assign({},n),r),n),{});return Object.assign(Object.assign(Object.assign(Object.assign(Object.assign(Object.assign(Object.assign({},e),t),xI(e,{generateColorPalettes:BB,generateNeutralColorPalettes:AB})),CI(e.fontSize)),LB(e)),SI(e)),DB(e))}const EI=qu(md),Yu={token:ql,override:{override:ql},hashed:!0},qy=ue.createContext(Yu),Qu="ant",Xy="anticon",zB=["outlined","borderless","filled"],HB=(e,t)=>t||(e?`${Qu}-${e}`:Qu),ht=d.createContext({getPrefixCls:HB,iconPrefixCls:Xy}),FB=`-ant-${Date.now()}-${Math.random()}`;function _B(e,t){const n={},r=(a,s)=>{let c=a.clone();return c=(s==null?void 0:s(c))||c,c.toRgbString()},o=(a,s)=>{const c=new xn(a),u=$s(c.toRgbString());n[`${s}-color`]=r(c),n[`${s}-color-disabled`]=u[1],n[`${s}-color-hover`]=u[4],n[`${s}-color-active`]=u[6],n[`${s}-color-outline`]=c.clone().setAlpha(.2).toRgbString(),n[`${s}-color-deprecated-bg`]=u[0],n[`${s}-color-deprecated-border`]=u[2]};if(t.primaryColor){o(t.primaryColor,"primary");const a=new 
xn(t.primaryColor),s=$s(a.toRgbString());s.forEach((u,p)=>{n[`primary-${p+1}`]=u}),n["primary-color-deprecated-l-35"]=r(a,u=>u.lighten(35)),n["primary-color-deprecated-l-20"]=r(a,u=>u.lighten(20)),n["primary-color-deprecated-t-20"]=r(a,u=>u.tint(20)),n["primary-color-deprecated-t-50"]=r(a,u=>u.tint(50)),n["primary-color-deprecated-f-12"]=r(a,u=>u.setAlpha(u.getAlpha()*.12));const c=new xn(s[0]);n["primary-color-active-deprecated-f-30"]=r(c,u=>u.setAlpha(u.getAlpha()*.3)),n["primary-color-active-deprecated-d-02"]=r(c,u=>u.darken(2))}return t.successColor&&o(t.successColor,"success"),t.warningColor&&o(t.warningColor,"warning"),t.errorColor&&o(t.errorColor,"error"),t.infoColor&&o(t.infoColor,"info"),` - :root { - ${Object.keys(n).map(a=>`--${e}-${a}: ${n[a]};`).join(` -`)} - } - `.trim()}function VB(e,t){const n=_B(e,t);$r()&&ea(n,`${FB}-dynamic-theme`)}const So=d.createContext(!1),Gy=e=>{let{children:t,disabled:n}=e;const r=d.useContext(So);return d.createElement(So.Provider,{value:n??r},t)},Is=d.createContext(void 0),WB=e=>{let{children:t,size:n}=e;const r=d.useContext(Is);return d.createElement(Is.Provider,{value:n||r},t)};function UB(){const e=d.useContext(So),t=d.useContext(Is);return{componentDisabled:e,componentSize:t}}var kI=qn(function e(){Kn(this,e)}),OI="CALC_UNIT",KB=new RegExp(OI,"g");function sm(e){return typeof e=="number"?"".concat(e).concat(OI):e}var qB=function(e){Co(n,e);var t=Eo(n);function n(r,o){var i;Kn(this,n),i=t.call(this),K(Ne(i),"result",""),K(Ne(i),"unitlessCssVar",void 0),K(Ne(i),"lowPriority",void 0);var a=st(r);return i.unitlessCssVar=o,r instanceof n?i.result="(".concat(r.result,")"):a==="number"?i.result=sm(r):a==="string"&&(i.result=r),i}return qn(n,[{key:"add",value:function(o){return o instanceof n?this.result="".concat(this.result," + ").concat(o.getResult()):(typeof o=="number"||typeof o=="string")&&(this.result="".concat(this.result," + ").concat(sm(o))),this.lowPriority=!0,this}},{key:"sub",value:function(o){return o instanceof 
n?this.result="".concat(this.result," - ").concat(o.getResult()):(typeof o=="number"||typeof o=="string")&&(this.result="".concat(this.result," - ").concat(sm(o))),this.lowPriority=!0,this}},{key:"mul",value:function(o){return this.lowPriority&&(this.result="(".concat(this.result,")")),o instanceof n?this.result="".concat(this.result," * ").concat(o.getResult(!0)):(typeof o=="number"||typeof o=="string")&&(this.result="".concat(this.result," * ").concat(o)),this.lowPriority=!1,this}},{key:"div",value:function(o){return this.lowPriority&&(this.result="(".concat(this.result,")")),o instanceof n?this.result="".concat(this.result," / ").concat(o.getResult(!0)):(typeof o=="number"||typeof o=="string")&&(this.result="".concat(this.result," / ").concat(o)),this.lowPriority=!1,this}},{key:"getResult",value:function(o){return this.lowPriority||o?"(".concat(this.result,")"):this.result}},{key:"equal",value:function(o){var i=this,a=o||{},s=a.unit,c=!0;return typeof s=="boolean"?c=s:Array.from(this.unitlessCssVar).some(function(u){return i.result.includes(u)})&&(c=!1),this.result=this.result.replace(KB,c?"px":""),typeof this.lowPriority<"u"?"calc(".concat(this.result,")"):this.result}}]),n}(kI),XB=function(e){Co(n,e);var t=Eo(n);function n(r){var o;return Kn(this,n),o=t.call(this),K(Ne(o),"result",0),r instanceof n?o.result=r.result:typeof r=="number"&&(o.result=r),o}return qn(n,[{key:"add",value:function(o){return o instanceof n?this.result+=o.result:typeof o=="number"&&(this.result+=o),this}},{key:"sub",value:function(o){return o instanceof n?this.result-=o.result:typeof o=="number"&&(this.result-=o),this}},{key:"mul",value:function(o){return o instanceof n?this.result*=o.result:typeof o=="number"&&(this.result*=o),this}},{key:"div",value:function(o){return o instanceof n?this.result/=o.result:typeof o=="number"&&(this.result/=o),this}},{key:"equal",value:function(){return this.result}}]),n}(kI),GB=function(t,n){var r=t==="css"?qB:XB;return function(o){return new 
r(o,n)}},HC=function(t,n){return"".concat([n,t.replace(/([A-Z]+)([A-Z][a-z]+)/g,"$1-$2").replace(/([a-z])([A-Z])/g,"$1-$2")].filter(Boolean).join("-"))};function gn(e){var t=d.useRef();t.current=e;var n=d.useCallback(function(){for(var r,o=arguments.length,i=new Array(o),a=0;a1e4){var r=Date.now();this.lastAccessBeat.forEach(function(o,i){r-o>JB&&(n.map.delete(i),n.lastAccessBeat.delete(i))}),this.accessBeat=0}}}]),e}(),WC=new eA;function tA(e,t){return ue.useMemo(function(){var n=WC.get(t);if(n)return n;var r=e();return WC.set(t,r),r},t)}var nA=function(){return{}};function rA(e){var t=e.useCSP,n=t===void 0?nA:t,r=e.useToken,o=e.usePrefix,i=e.getResetStyles,a=e.getCommonStyle,s=e.getCompUnitless;function c(h,m,b,y){var w=Array.isArray(h)?h[0]:h;function C(M){return"".concat(String(w)).concat(M.slice(0,1).toUpperCase()).concat(M.slice(1))}var S=(y==null?void 0:y.unitless)||{},E=typeof s=="function"?s(h):{},k=Z(Z({},E),{},K({},C("zIndexPopup"),!0));Object.keys(S).forEach(function(M){k[C(M)]=S[M]});var O=Z(Z({},y),{},{unitless:k,prefixToken:C}),$=p(h,m,b,O),T=u(w,b,O);return function(M){var P=arguments.length>1&&arguments[1]!==void 0?arguments[1]:M,R=$(M,P),A=ve(R,2),V=A[1],z=T(P),B=ve(z,2),_=B[0],H=B[1];return[_,V,H]}}function u(h,m,b){var y=b.unitless,w=b.injectStyle,C=w===void 0?!0:w,S=b.prefixToken,E=b.ignore,k=function(T){var M=T.rootCls,P=T.cssVar,R=P===void 0?{}:P,A=r(),V=A.realToken;return aB({path:[h],prefix:R.prefix,key:R.key,unitless:y,ignore:E,token:V,scope:M},function(){var z=VC(h,V,m),B=FC(h,V,z,{deprecatedTokens:b==null?void 0:b.deprecatedTokens});return Object.keys(z).forEach(function(_){B[S(_)]=B[_],delete B[_]}),B}),null},O=function(T){var M=r(),P=M.cssVar;return[function(R){return C&&P?ue.createElement(ue.Fragment,null,ue.createElement(k,{rootCls:T,cssVar:P,component:h}),R):R},P==null?void 0:P.key]};return O}function p(h,m,b){var y=arguments.length>3&&arguments[3]!==void 
0?arguments[3]:{},w=Array.isArray(h)?h:[h,h],C=ve(w,1),S=C[0],E=w.join("-"),k=e.layer||{name:"antd"};return function(O){var $=arguments.length>1&&arguments[1]!==void 0?arguments[1]:O,T=r(),M=T.theme,P=T.realToken,R=T.hashId,A=T.token,V=T.cssVar,z=o(),B=z.rootPrefixCls,_=z.iconPrefixCls,H=n(),j=V?"css":"js",L=tA(function(){var q=new Set;return V&&Object.keys(y.unitless||{}).forEach(function(J){q.add(Ep(J,V.prefix)),q.add(Ep(J,HC(S,V.prefix)))}),GB(j,q)},[j,S,V==null?void 0:V.prefix]),F=ZB(j),U=F.max,D=F.min,W={theme:M,token:A,hashId:R,nonce:function(){return H.nonce},clientOnly:y.clientOnly,layer:k,order:y.order||-999};H0(Z(Z({},W),{},{clientOnly:!1,path:["Shared",B]}),function(){return typeof i=="function"?i(A):[]});var G=H0(Z(Z({},W),{},{path:[E,O,_]}),function(){if(y.injectStyle===!1)return[];var q=QB(A),J=q.token,Y=q.flush,Q=VC(S,P,b),te=".".concat(O),ce=FC(S,P,Q,{deprecatedTokens:y.deprecatedTokens});V&&Q&&st(Q)==="object"&&Object.keys(Q).forEach(function(ee){Q[ee]="var(".concat(Ep(ee,HC(S,V.prefix)),")")});var se=vn(J,{componentCls:te,prefixCls:O,iconCls:".".concat(_),antCls:".".concat(B),calc:L,max:U,min:D},V?Q:ce),ne=m(se,{hashId:R,prefixCls:O,rootPrefixCls:B,iconPrefixCls:_});Y(S,ce);var ae=typeof a=="function"?a(se,O,$,y.resetFont):null;return[y.resetStyle===!1?null:ae,ne]});return[G,R]}}function v(h,m,b){var y=arguments.length>3&&arguments[3]!==void 0?arguments[3]:{},w=p(h,m,b,Z({resetStyle:!1,order:-998},y)),C=function(E){var k=E.prefixCls,O=E.rootCls,$=O===void 0?k:O;return w(k,$),null};return C}return{genStyleHooks:c,genSubStyleComponent:v,genComponentStyleHook:p}}const Zu=["blue","purple","cyan","green","magenta","pink","red","orange","yellow","volcano","geekblue","lime","gold"],oA="5.21.6";function cm(e){return e>=0&&e<=255}function Uf(e,t){const{r:n,g:r,b:o,a:i}=new xn(e).toRgb();if(i<1)return e;const{r:a,g:s,b:c}=new xn(t).toRgb();for(let u=.01;u<=1;u+=.01){const 
p=Math.round((n-a*(1-u))/u),v=Math.round((r-s*(1-u))/u),h=Math.round((o-c*(1-u))/u);if(cm(p)&&cm(v)&&cm(h))return new xn({r:p,g:v,b:h,a:Math.round(u*100)/100}).toRgbString()}return new xn({r:n,g:r,b:o,a:1}).toRgbString()}var iA=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{delete r[h]});const o=Object.assign(Object.assign({},n),r),i=480,a=576,s=768,c=992,u=1200,p=1600;if(o.motion===!1){const h="0s";o.motionDurationFast=h,o.motionDurationMid=h,o.motionDurationSlow=h}return Object.assign(Object.assign(Object.assign({},o),{colorFillContent:o.colorFillSecondary,colorFillContentHover:o.colorFill,colorFillAlter:o.colorFillQuaternary,colorBgContainerDisabled:o.colorFillTertiary,colorBorderBg:o.colorBgContainer,colorSplit:Uf(o.colorBorderSecondary,o.colorBgContainer),colorTextPlaceholder:o.colorTextQuaternary,colorTextDisabled:o.colorTextQuaternary,colorTextHeading:o.colorText,colorTextLabel:o.colorTextSecondary,colorTextDescription:o.colorTextTertiary,colorTextLightSolid:o.colorWhite,colorHighlight:o.colorError,colorBgTextHover:o.colorFillSecondary,colorBgTextActive:o.colorFill,colorIcon:o.colorTextTertiary,colorIconHover:o.colorText,colorErrorOutline:Uf(o.colorErrorBg,o.colorBgContainer),colorWarningOutline:Uf(o.colorWarningBg,o.colorBgContainer),fontSizeIcon:o.fontSizeSM,lineWidthFocus:o.lineWidth*3,lineWidth:o.lineWidth,controlOutlineWidth:o.lineWidth*2,controlInteractiveSize:o.controlHeight/2,controlItemBgHover:o.colorFillTertiary,controlItemBgActive:o.colorPrimaryBg,controlItemBgActiveHover:o.colorPrimaryBgHover,controlItemBgActiveDisabled:o.colorFill,controlTmpOutline:o.colorFillQuaternary,controlOutline:Uf(o.colorPrimaryBg,o.colorBgContainer),lineType:o.lineType,borderRadius:o.borderRadius,borderRadiusXS:o.borderRadiusXS,borderRadiusSM:o.borderRadiusSM,borderRadiusLG:o.borderRadiusLG,fontWeightStro
ng:600,opacityLoading:.65,linkDecoration:"none",linkHoverDecoration:"none",linkFocusDecoration:"none",controlPaddingHorizontal:12,controlPaddingHorizontalSM:8,paddingXXS:o.sizeXXS,paddingXS:o.sizeXS,paddingSM:o.sizeSM,padding:o.size,paddingMD:o.sizeMD,paddingLG:o.sizeLG,paddingXL:o.sizeXL,paddingContentHorizontalLG:o.sizeLG,paddingContentVerticalLG:o.sizeMS,paddingContentHorizontal:o.sizeMS,paddingContentVertical:o.sizeSM,paddingContentHorizontalSM:o.size,paddingContentVerticalSM:o.sizeXS,marginXXS:o.sizeXXS,marginXS:o.sizeXS,marginSM:o.sizeSM,margin:o.size,marginMD:o.sizeMD,marginLG:o.sizeLG,marginXL:o.sizeXL,marginXXL:o.sizeXXL,boxShadow:` - 0 6px 16px 0 rgba(0, 0, 0, 0.08), - 0 3px 6px -4px rgba(0, 0, 0, 0.12), - 0 9px 28px 8px rgba(0, 0, 0, 0.05) - `,boxShadowSecondary:` - 0 6px 16px 0 rgba(0, 0, 0, 0.08), - 0 3px 6px -4px rgba(0, 0, 0, 0.12), - 0 9px 28px 8px rgba(0, 0, 0, 0.05) - `,boxShadowTertiary:` - 0 1px 2px 0 rgba(0, 0, 0, 0.03), - 0 1px 6px -1px rgba(0, 0, 0, 0.02), - 0 2px 4px 0 rgba(0, 0, 0, 0.02) - `,screenXS:i,screenXSMin:i,screenXSMax:a-1,screenSM:a,screenSMMin:a,screenSMMax:s-1,screenMD:s,screenMDMin:s,screenMDMax:c-1,screenLG:c,screenLGMin:c,screenLGMax:u-1,screenXL:u,screenXLMin:u,screenXLMax:p-1,screenXXL:p,screenXXLMin:p,boxShadowPopoverArrow:"2px 2px 5px rgba(0, 0, 0, 0.05)",boxShadowCard:` - 0 1px 2px -2px ${new xn("rgba(0, 0, 0, 0.16)").toRgbString()}, - 0 3px 6px 0 ${new xn("rgba(0, 0, 0, 0.12)").toRgbString()}, - 0 5px 12px 4px ${new xn("rgba(0, 0, 0, 0.09)").toRgbString()} - `,boxShadowDrawerRight:` - -6px 0 16px 0 rgba(0, 0, 0, 0.08), - -3px 0 6px -4px rgba(0, 0, 0, 0.12), - -9px 0 28px 8px rgba(0, 0, 0, 0.05) - `,boxShadowDrawerLeft:` - 6px 0 16px 0 rgba(0, 0, 0, 0.08), - 3px 0 6px -4px rgba(0, 0, 0, 0.12), - 9px 0 28px 8px rgba(0, 0, 0, 0.05) - `,boxShadowDrawerUp:` - 0 6px 16px 0 rgba(0, 0, 0, 0.08), - 0 3px 6px -4px rgba(0, 0, 0, 0.12), - 0 9px 28px 8px rgba(0, 0, 0, 0.05) - `,boxShadowDrawerDown:` - 0 -6px 16px 0 rgba(0, 0, 0, 
0.08), - 0 -3px 6px -4px rgba(0, 0, 0, 0.12), - 0 -9px 28px 8px rgba(0, 0, 0, 0.05) - `,boxShadowTabsOverflowLeft:"inset 10px 0 8px -8px rgba(0, 0, 0, 0.08)",boxShadowTabsOverflowRight:"inset -10px 0 8px -8px rgba(0, 0, 0, 0.08)",boxShadowTabsOverflowTop:"inset 0 10px 8px -8px rgba(0, 0, 0, 0.08)",boxShadowTabsOverflowBottom:"inset 0 -10px 8px -8px rgba(0, 0, 0, 0.08)"}),r)}var UC=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{const r=n.getDerivativeToken(e),{override:o}=t,i=UC(t,["override"]);let a=Object.assign(Object.assign({},r),{override:o});return a=Yy(a),i&&Object.entries(i).forEach(s=>{let[c,u]=s;const{theme:p}=u,v=UC(u,["theme"]);let h=v;p&&(h=TI(Object.assign(Object.assign({},a),v),{override:v},p)),a[c]=h}),a};function Ir(){const{token:e,hashed:t,theme:n,override:r,cssVar:o}=ue.useContext(qy),i=`${oA}-${t||""}`,a=n||EI,[s,c,u]=R6(a,[ql,e],{salt:i,override:r,getComputedToken:TI,formatToken:Yy,cssVar:o&&{prefix:o.prefix,key:o.key,unitless:II,ignore:aA,preserve:sA}});return[a,u,t?c:"",s,o]}const Ka={overflow:"hidden",whiteSpace:"nowrap",textOverflow:"ellipsis"},jn=function(e){let t=arguments.length>1&&arguments[1]!==void 0?arguments[1]:!1;return{boxSizing:"border-box",margin:0,padding:0,color:e.colorText,fontSize:e.fontSize,lineHeight:e.lineHeight,listStyle:"none",fontFamily:t?"inherit":e.fontFamily}},Mv=()=>({display:"inline-flex",alignItems:"center",color:"inherit",fontStyle:"normal",lineHeight:0,textAlign:"center",textTransform:"none",verticalAlign:"-0.125em",textRendering:"optimizeLegibility","-webkit-font-smoothing":"antialiased","-moz-osx-font-smoothing":"grayscale","> 
*":{lineHeight:1},svg:{display:"inline-block"}}),Ps=()=>({"&::before":{display:"table",content:'""'},"&::after":{display:"table",clear:"both",content:'""'}}),lA=e=>({a:{color:e.colorLink,textDecoration:e.linkDecoration,backgroundColor:"transparent",outline:"none",cursor:"pointer",transition:`color ${e.motionDurationSlow}`,"-webkit-text-decoration-skip":"objects","&:hover":{color:e.colorLinkHover},"&:active":{color:e.colorLinkActive},"&:active, &:hover":{textDecoration:e.linkHoverDecoration,outline:0},"&:focus":{textDecoration:e.linkFocusDecoration,outline:0},"&[disabled]":{color:e.colorTextDisabled,cursor:"not-allowed"}}}),cA=(e,t,n,r)=>{const o=`[class^="${t}"], [class*=" ${t}"]`,i=n?`.${n}`:o,a={boxSizing:"border-box","&::before, &::after":{boxSizing:"border-box"}};let s={};return r!==!1&&(s={fontFamily:e.fontFamily,fontSize:e.fontSize}),{[i]:Object.assign(Object.assign(Object.assign({},s),a),{[o]:a})}},qa=e=>({outline:`${de(e.lineWidthFocus)} solid ${e.colorPrimaryBorder}`,outlineOffset:1,transition:"outline-offset 0s, outline 0s"}),Xl=e=>({"&:focus-visible":Object.assign({},qa(e))}),Qy=e=>Object.assign(Object.assign({color:e.colorLink,textDecoration:e.linkDecoration,outline:"none",cursor:"pointer",transition:`all ${e.motionDurationSlow}`,border:0,padding:0,background:"none",userSelect:"none"},Xl(e)),{"&:focus, &:hover":{color:e.colorLinkHover},"&:active":{color:e.colorLinkActive}}),PI=(e,t)=>{const[n,r]=Ir();return H0({theme:n,token:r,hashId:"",path:["ant-design-icons",e],nonce:()=>t==null?void 0:t.nonce,layer:{name:"antd"}},()=>[{[`.${e}`]:Object.assign(Object.assign({},Mv()),{[`.${e} .${e}-icon`]:{display:"block"}})}])},{genStyleHooks:In,genComponentStyleHook:MI,genSubStyleComponent:ic}=rA({usePrefix:()=>{const{getPrefixCls:e,iconPrefixCls:t}=d.useContext(ht);return{rootPrefixCls:e(),iconPrefixCls:t}},useToken:()=>{const[e,t,n,r,o]=Ir();return{theme:e,realToken:t,hashId:n,token:r,cssVar:o}},useCSP:()=>{const{csp:e,iconPrefixCls:t}=d.useContext(ht);return 
PI(t,e),e??{}},getResetStyles:e=>[{"&":lA(e)}],getCommonStyle:cA,getCompUnitless:()=>II});function NI(e,t){return Zu.reduce((n,r)=>{const o=e[`${r}1`],i=e[`${r}3`],a=e[`${r}6`],s=e[`${r}7`];return Object.assign(Object.assign({},n),t(r,{lightColor:o,lightBorderColor:i,darkColor:a,textColor:s}))},{})}const uA=Object.assign({},Ev),{useId:KC}=uA,dA=()=>"",fA=typeof KC>"u"?dA:KC;function pA(e,t,n){var r;As();const o=e||{},i=o.inherit===!1||!t?Object.assign(Object.assign({},Yu),{hashed:(r=t==null?void 0:t.hashed)!==null&&r!==void 0?r:Yu.hashed,cssVar:t==null?void 0:t.cssVar}):t,a=fA();return Ls(()=>{var s,c;if(!e)return t;const u=Object.assign({},i.components);Object.keys(e.components||{}).forEach(h=>{u[h]=Object.assign(Object.assign({},u[h]),e.components[h])});const p=`css-var-${a.replace(/:/g,"")}`,v=((s=o.cssVar)!==null&&s!==void 0?s:i.cssVar)&&Object.assign(Object.assign(Object.assign({prefix:n==null?void 0:n.prefixCls},typeof i.cssVar=="object"?i.cssVar:{}),typeof o.cssVar=="object"?o.cssVar:{}),{key:typeof o.cssVar=="object"&&((c=o.cssVar)===null||c===void 0?void 0:c.key)||p});return Object.assign(Object.assign(Object.assign({},i),o),{token:Object.assign(Object.assign({},i.token),o.token),components:u,cssVar:v})},[o,i],(s,c)=>s.some((u,p)=>{const v=c[p];return!zi(u,v,!0)}))}var vA=["children"],RI=d.createContext({});function hA(e){var t=e.children,n=Mt(e,vA);return d.createElement(RI.Provider,{value:n},t)}var gA=function(e){Co(n,e);var t=Eo(n);function n(){return Kn(this,n),t.apply(this,arguments)}return qn(n,[{key:"render",value:function(){return this.props.children}}]),n}(d.Component);function mA(e){var t=d.useReducer(function(s){return s+1},0),n=ve(t,2),r=n[1],o=d.useRef(e),i=gn(function(){return o.current}),a=gn(function(s){o.current=typeof s=="function"?s(o.current):s,r()});return[i,a]}var Pa="none",Kf="appear",qf="enter",Xf="leave",qC="none",si="prepare",El="start",kl="active",Zy="end",DI="prepared";function XC(e,t){var n={};return 
n[e.toLowerCase()]=t.toLowerCase(),n["Webkit".concat(e)]="webkit".concat(t),n["Moz".concat(e)]="moz".concat(t),n["ms".concat(e)]="MS".concat(t),n["O".concat(e)]="o".concat(t.toLowerCase()),n}function bA(e,t){var n={animationend:XC("Animation","AnimationEnd"),transitionend:XC("Transition","TransitionEnd")};return e&&("AnimationEvent"in t||delete n.animationend.animation,"TransitionEvent"in t||delete n.transitionend.transition),n}var yA=bA($r(),typeof window<"u"?window:{}),jI={};if($r()){var wA=document.createElement("div");jI=wA.style}var Gf={};function LI(e){if(Gf[e])return Gf[e];var t=yA[e];if(t)for(var n=Object.keys(t),r=n.length,o=0;o1&&arguments[1]!==void 0?arguments[1]:2;t();var i=bn(function(){o<=1?r({isCanceled:function(){return i!==e.current}}):n(r,o-1)});e.current=i}return d.useEffect(function(){return function(){t()}},[]),[n,t]};var CA=[si,El,kl,Zy],EA=[si,DI],FI=!1,kA=!0;function _I(e){return e===kl||e===Zy}const OA=function(e,t,n){var r=Ts(qC),o=ve(r,2),i=o[0],a=o[1],s=SA(),c=ve(s,2),u=c[0],p=c[1];function v(){a(si,!0)}var h=t?EA:CA;return HI(function(){if(i!==qC&&i!==Zy){var m=h.indexOf(i),b=h[m+1],y=n(i);y===FI?a(b,!0):b&&u(function(w){function C(){w.isCanceled()||a(b,!0)}y===!0?C():Promise.resolve(y).then(C)})}},[e,i]),d.useEffect(function(){return function(){p()}},[]),[v,i]};function $A(e,t,n,r){var o=r.motionEnter,i=o===void 0?!0:o,a=r.motionAppear,s=a===void 0?!0:a,c=r.motionLeave,u=c===void 0?!0:c,p=r.motionDeadline,v=r.motionLeaveImmediately,h=r.onAppearPrepare,m=r.onEnterPrepare,b=r.onLeavePrepare,y=r.onAppearStart,w=r.onEnterStart,C=r.onLeaveStart,S=r.onAppearActive,E=r.onEnterActive,k=r.onLeaveActive,O=r.onAppearEnd,$=r.onEnterEnd,T=r.onLeaveEnd,M=r.onVisibleChanged,P=Ts(),R=ve(P,2),A=R[0],V=R[1],z=mA(Pa),B=ve(z,2),_=B[0],H=B[1],j=Ts(null),L=ve(j,2),F=L[0],U=L[1],D=_(),W=d.useRef(!1),G=d.useRef(null);function q(){return n()}var J=d.useRef(!1);function Y(){H(Pa),U(null,!0)}var Q=gn(function(ye){var Te=_();if(Te!==Pa){var 
Ae=q();if(!(ye&&!ye.deadline&&ye.target!==Ae)){var me=J.current,Ie;Te===Kf&&me?Ie=O==null?void 0:O(Ae,ye):Te===qf&&me?Ie=$==null?void 0:$(Ae,ye):Te===Xf&&me&&(Ie=T==null?void 0:T(Ae,ye)),me&&Ie!==!1&&Y()}}}),te=xA(Q),ce=ve(te,1),se=ce[0],ne=function(Te){switch(Te){case Kf:return K(K(K({},si,h),El,y),kl,S);case qf:return K(K(K({},si,m),El,w),kl,E);case Xf:return K(K(K({},si,b),El,C),kl,k);default:return{}}},ae=d.useMemo(function(){return ne(D)},[D]),ee=OA(D,!e,function(ye){if(ye===si){var Te=ae[si];return Te?Te(q()):FI}if(pe in ae){var Ae;U(((Ae=ae[pe])===null||Ae===void 0?void 0:Ae.call(ae,q(),null))||null)}return pe===kl&&D!==Pa&&(se(q()),p>0&&(clearTimeout(G.current),G.current=setTimeout(function(){Q({deadline:!0})},p))),pe===DI&&Y(),kA}),re=ve(ee,2),le=re[0],pe=re[1],Oe=_I(pe);J.current=Oe,HI(function(){V(t);var ye=W.current;W.current=!0;var Te;!ye&&t&&s&&(Te=Kf),ye&&t&&i&&(Te=qf),(ye&&!t&&u||!ye&&v&&!t&&u)&&(Te=Xf);var Ae=ne(Te);Te&&(e||Ae[si])?(H(Te),le()):H(Pa)},[t]),d.useEffect(function(){(D===Kf&&!s||D===qf&&!i||D===Xf&&!u)&&H(Pa)},[s,i,u]),d.useEffect(function(){return function(){W.current=!1,clearTimeout(G.current)}},[]);var ge=d.useRef(!1);d.useEffect(function(){A&&(ge.current=!0),A!==void 0&&D===Pa&&((ge.current||A)&&(M==null||M(A)),ge.current=!0)},[A,D]);var Re=F;return ae[si]&&pe===El&&(Re=Z({transition:"none"},Re)),[D,pe,Re,A??t]}function IA(e){var t=e;st(e)==="object"&&(t=e.transitionSupport);function n(o,i){return!!(o.motionName&&t&&i!==!1)}var r=d.forwardRef(function(o,i){var a=o.visible,s=a===void 0?!0:a,c=o.removeOnLeave,u=c===void 0?!0:c,p=o.forceRender,v=o.children,h=o.motionName,m=o.leavedClassName,b=o.eventProps,y=d.useContext(RI),w=y.motion,C=n(o,w),S=d.useRef(),E=d.useRef();function k(){try{return S.current instanceof HTMLElement?S.current:wu(E.current)}catch{return null}}var O=$A(C,s,k,o),$=ve(O,4),T=$[0],M=$[1],P=$[2],R=$[3],A=d.useRef(R);R&&(A.current=!0);var 
V=d.useCallback(function(F){S.current=F,_u(i,F)},[i]),z,B=Z(Z({},b),{},{visible:s});if(!v)z=null;else if(T===Pa)R?z=v(Z({},B),V):!u&&A.current&&m?z=v(Z(Z({},B),{},{className:m}),V):p||!u&&!m?z=v(Z(Z({},B),{},{style:{display:"none"}}),V):z=null;else{var _;M===si?_="prepare":_I(M)?_="active":M===El&&(_="start");var H=QC(h,"".concat(T,"-").concat(_));z=v(Z(Z({},B),{},{className:ie(QC(h,T),K(K({},H,H&&_),h,typeof h=="string")),style:P}),V)}if(d.isValidElement(z)&&vi(z)){var j=z,L=j.ref;L||(z=d.cloneElement(z,{ref:V}))}return d.createElement(gA,{ref:E},z)});return r.displayName="CSSMotion",r}const Xo=IA(zI);var nb="add",rb="keep",ob="remove",um="removed";function TA(e){var t;return e&&st(e)==="object"&&"key"in e?t=e:t={key:e},Z(Z({},t),{},{key:String(t.key)})}function ib(){var e=arguments.length>0&&arguments[0]!==void 0?arguments[0]:[];return e.map(TA)}function PA(){var e=arguments.length>0&&arguments[0]!==void 0?arguments[0]:[],t=arguments.length>1&&arguments[1]!==void 0?arguments[1]:[],n=[],r=0,o=t.length,i=ib(e),a=ib(t);i.forEach(function(u){for(var p=!1,v=r;v1});return c.forEach(function(u){n=n.filter(function(p){var v=p.key,h=p.status;return v!==u||h!==ob}),n.forEach(function(p){p.key===u&&(p.status=rb)})}),n}var MA=["component","children","onVisibleChanged","onAllRemoved"],NA=["status"],RA=["eventProps","visible","children","motionName","motionAppear","motionEnter","motionLeave","motionLeaveImmediately","motionDeadline","removeOnLeave","leavedClassName","onAppearPrepare","onAppearStart","onAppearActive","onAppearEnd","onEnterStart","onEnterActive","onEnterEnd","onLeaveStart","onLeaveActive","onLeaveEnd"];function DA(e){var t=arguments.length>1&&arguments[1]!==void 0?arguments[1]:Xo,n=function(r){Co(i,r);var o=Eo(i);function i(){var a;Kn(this,i);for(var s=arguments.length,c=new Array(s),u=0;unull;var BA=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof 
Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);ot.endsWith("Color"))}const FA=e=>{const{prefixCls:t,iconPrefixCls:n,theme:r,holderRender:o}=e;t!==void 0&&(qp=t),n!==void 0&&(WI=n),"holderRender"in e&&(KI=o),r&&(HA(r)?VB(Np(),r):UI=r)},_A=()=>({getPrefixCls:(e,t)=>t||(e?`${Np()}-${e}`:Np()),getIconPrefixCls:zA,getRootPrefixCls:()=>qp||Np(),getTheme:()=>UI,holderRender:KI}),VA=e=>{const{children:t,csp:n,autoInsertSpaceInButton:r,alert:o,anchor:i,form:a,locale:s,componentSize:c,direction:u,space:p,splitter:v,virtual:h,dropdownMatchSelectWidth:m,popupMatchSelectWidth:b,popupOverflow:y,legacyLocale:w,parentContext:C,iconPrefixCls:S,theme:E,componentDisabled:k,segmented:O,statistic:$,spin:T,calendar:M,carousel:P,cascader:R,collapse:A,typography:V,checkbox:z,descriptions:B,divider:_,drawer:H,skeleton:j,steps:L,image:F,layout:U,list:D,mentions:W,modal:G,progress:q,result:J,slider:Y,breadcrumb:Q,menu:te,pagination:ce,input:se,textArea:ne,empty:ae,badge:ee,radio:re,rate:le,switch:pe,transfer:Oe,avatar:ge,message:Re,tag:ye,table:Te,card:Ae,tabs:me,timeline:Ie,timePicker:Le,upload:Be,notification:et,tree:rt,colorPicker:Ze,datePicker:Ve,rangePicker:Ye,flex:Ge,wave:Fe,dropdown:we,warning:ze,tour:Me,floatButtonGroup:Pe,variant:Ke,inputNumber:St,treeSelect:Ft}=e,Lt=d.useCallback((_e,qe)=>{const{prefixCls:ot}=e;if(qe)return qe;const at=ot||C.getPrefixCls("");return _e?`${at}-${_e}`:at},[C.getPrefixCls,e.prefixCls]),Ct=S||C.iconPrefixCls||Xy,Xt=n||C.csp;PI(Ct,Xt);const 
Pt=pA(E,C.theme,{prefixCls:Lt("")}),Gt={csp:Xt,autoInsertSpaceInButton:r,alert:o,anchor:i,locale:s||w,direction:u,space:p,splitter:v,virtual:h,popupMatchSelectWidth:b??m,popupOverflow:y,getPrefixCls:Lt,iconPrefixCls:Ct,theme:Pt,segmented:O,statistic:$,spin:T,calendar:M,carousel:P,cascader:R,collapse:A,typography:V,checkbox:z,descriptions:B,divider:_,drawer:H,skeleton:j,steps:L,image:F,input:se,textArea:ne,layout:U,list:D,mentions:W,modal:G,progress:q,result:J,slider:Y,breadcrumb:Q,menu:te,pagination:ce,empty:ae,badge:ee,radio:re,rate:le,switch:pe,transfer:Oe,avatar:ge,message:Re,tag:ye,table:Te,card:Ae,tabs:me,timeline:Ie,timePicker:Le,upload:Be,notification:et,tree:rt,colorPicker:Ze,datePicker:Ve,rangePicker:Ye,flex:Ge,wave:Fe,dropdown:we,warning:ze,tour:Me,floatButtonGroup:Pe,variant:Ke,inputNumber:St,treeSelect:Ft},ft=Object.assign({},C);Object.keys(Gt).forEach(_e=>{Gt[_e]!==void 0&&(ft[_e]=Gt[_e])}),AA.forEach(_e=>{const qe=e[_e];qe&&(ft[_e]=qe)}),typeof r<"u"&&(ft.button=Object.assign({autoInsertSpace:r},ft.button));const Je=Ls(()=>ft,ft,(_e,qe)=>{const ot=Object.keys(_e),at=Object.keys(qe);return ot.length!==at.length||ot.some(xt=>_e[xt]!==qe[xt])}),He=d.useMemo(()=>({prefixCls:Ct,csp:Xt}),[Ct,Xt]);let We=d.createElement(d.Fragment,null,d.createElement(LA,{dropdownMatchSelectWidth:m}),t);const Et=d.useMemo(()=>{var _e,qe,ot,at;return Cl(((_e=hi.Form)===null||_e===void 0?void 0:_e.defaultValidateMessages)||{},((ot=(qe=Je.locale)===null||qe===void 0?void 0:qe.Form)===null||ot===void 0?void 0:ot.defaultValidateMessages)||{},((at=Je.form)===null||at===void 0?void 0:at.validateMessages)||{},(a==null?void 0:a.validateMessages)||{})},[Je,a==null?void 0:a.validateMessages]);Object.keys(Et).length>0&&(We=d.createElement(vI.Provider,{value:Et},We)),s&&(We=d.createElement(gB,{locale:s,_ANT_MARK__:hB},We)),(Ct||Xt)&&(We=d.createElement(Wy.Provider,{value:He},We)),c&&(We=d.createElement(WB,{size:c},We)),We=d.createElement(jA,null,We);const wt=d.useMemo(()=>{const 
_e=Pt||{},{algorithm:qe,token:ot,components:at,cssVar:xt}=_e,_t=BA(_e,["algorithm","token","components","cssVar"]),pt=qe&&(!Array.isArray(qe)||qe.length>0)?qu(qe):EI,dt={};Object.entries(at||{}).forEach(kt=>{let[Kt,ln]=kt;const Yt=Object.assign({},ln);"algorithm"in Yt&&(Yt.algorithm===!0?Yt.theme=pt:(Array.isArray(Yt.algorithm)||typeof Yt.algorithm=="function")&&(Yt.theme=qu(Yt.algorithm)),delete Yt.algorithm),dt[Kt]=Yt});const $t=Object.assign(Object.assign({},ql),ot);return Object.assign(Object.assign({},_t),{theme:pt,token:$t,components:dt,override:Object.assign({override:$t},dt),cssVar:xt})},[Pt]);return E&&(We=d.createElement(qy.Provider,{value:wt},We)),Je.warning&&(We=d.createElement(dB.Provider,{value:Je.warning},We)),k!==void 0&&(We=d.createElement(Gy,{disabled:k},We)),d.createElement(ht.Provider,{value:Je},We)},la=e=>{const t=d.useContext(ht),n=d.useContext(Uy);return d.createElement(VA,Object.assign({parentContext:t,legacyLocale:n},e))};la.ConfigContext=ht;la.SizeContext=Is;la.config=FA;la.useConfig=UB;Object.defineProperty(la,"SizeContext",{get:()=>Is});var WA={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M512 64C264.6 64 64 264.6 64 512s200.6 448 448 448 448-200.6 448-448S759.4 64 512 64zm193.5 301.7l-210.6 292a31.8 31.8 0 01-51.7 0L318.5 484.9c-3.8-5.3 0-12.7 6.5-12.7h46.9c10.2 0 19.9 4.9 25.9 13.3l71.2 98.8 157.2-218c6-8.3 15.6-13.3 25.9-13.3H699c6.5 0 10.3 7.4 6.5 12.7z"}}]},name:"check-circle",theme:"filled"};function qI(e){var t;return e==null||(t=e.getRootNode)===null||t===void 0?void 0:t.call(e)}function UA(e){return qI(e)instanceof ShadowRoot}function Xp(e){return UA(e)?qI(e):null}function KA(e){return e.replace(/-(.)/g,function(t,n){return n.toUpperCase()})}function qA(e,t){Fn(e,"[@ant-design/icons] ".concat(t))}function ZC(e){return st(e)==="object"&&typeof e.name=="string"&&typeof e.theme=="string"&&(st(e.icon)==="object"||typeof e.icon=="function")}function JC(){var 
e=arguments.length>0&&arguments[0]!==void 0?arguments[0]:{};return Object.keys(e).reduce(function(t,n){var r=e[n];switch(n){case"class":t.className=r,delete t.class;break;default:delete t[n],t[KA(n)]=r}return t},{})}function ab(e,t,n){return n?ue.createElement(e.tag,Z(Z({key:t},JC(e.attrs)),n),(e.children||[]).map(function(r,o){return ab(r,"".concat(t,"-").concat(e.tag,"-").concat(o))})):ue.createElement(e.tag,Z({key:t},JC(e.attrs)),(e.children||[]).map(function(r,o){return ab(r,"".concat(t,"-").concat(e.tag,"-").concat(o))}))}function XI(e){return $s(e)[0]}function GI(e){return e?Array.isArray(e)?e:[e]:[]}var XA=` -.anticon { - display: inline-flex; - align-items: center; - color: inherit; - font-style: normal; - line-height: 0; - text-align: center; - text-transform: none; - vertical-align: -0.125em; - text-rendering: optimizeLegibility; - -webkit-font-smoothing: antialiased; - -moz-osx-font-smoothing: grayscale; -} - -.anticon > * { - line-height: 1; -} - -.anticon svg { - display: inline-block; -} - -.anticon::before { - display: none; -} - -.anticon .anticon-icon { - display: block; -} - -.anticon[tabindex] { - cursor: pointer; -} - -.anticon-spin::before, -.anticon-spin { - display: inline-block; - -webkit-animation: loadingCircle 1s infinite linear; - animation: loadingCircle 1s infinite linear; -} - -@-webkit-keyframes loadingCircle { - 100% { - -webkit-transform: rotate(360deg); - transform: rotate(360deg); - } -} - -@keyframes loadingCircle { - 100% { - -webkit-transform: rotate(360deg); - transform: rotate(360deg); - } -} -`,GA=function(t){var n=d.useContext(Wy),r=n.csp,o=n.prefixCls,i=XA;o&&(i=i.replace(/anticon/g,o)),d.useEffect(function(){var a=t.current,s=Xp(a);ea(i,"@ant-design-icons",{prepend:!0,csp:r,attachTo:s})},[])},YA=["icon","className","onClick","style","primaryColor","secondaryColor"],Su={primaryColor:"#333",secondaryColor:"#E6E6E6",calculated:!1};function QA(e){var 
t=e.primaryColor,n=e.secondaryColor;Su.primaryColor=t,Su.secondaryColor=n||XI(t),Su.calculated=!!n}function ZA(){return Z({},Su)}var ac=function(t){var n=t.icon,r=t.className,o=t.onClick,i=t.style,a=t.primaryColor,s=t.secondaryColor,c=Mt(t,YA),u=d.useRef(),p=Su;if(a&&(p={primaryColor:a,secondaryColor:s||XI(a)}),GA(u),qA(ZC(n),"icon should be icon definiton, but got ".concat(n)),!ZC(n))return null;var v=n;return v&&typeof v.icon=="function"&&(v=Z(Z({},v),{},{icon:v.icon(p.primaryColor,p.secondaryColor)})),ab(v.icon,"svg-".concat(v.name),Z(Z({className:r,onClick:o,style:i,"data-icon":v.name,width:"1em",height:"1em",fill:"currentColor","aria-hidden":"true"},c),{},{ref:u}))};ac.displayName="IconReact";ac.getTwoToneColors=ZA;ac.setTwoToneColors=QA;function YI(e){var t=GI(e),n=ve(t,2),r=n[0],o=n[1];return ac.setTwoToneColors({primaryColor:r,secondaryColor:o})}function JA(){var e=ac.getTwoToneColors();return e.calculated?[e.primaryColor,e.secondaryColor]:e.primaryColor}var e5=["className","icon","spin","rotate","tabIndex","onClick","twoToneColor"];YI(Kl.primary);var en=d.forwardRef(function(e,t){var n=e.className,r=e.icon,o=e.spin,i=e.rotate,a=e.tabIndex,s=e.onClick,c=e.twoToneColor,u=Mt(e,e5),p=d.useContext(Wy),v=p.prefixCls,h=v===void 0?"anticon":v,m=p.rootClassName,b=ie(m,h,K(K({},"".concat(h,"-").concat(r.name),!!r.name),"".concat(h,"-spin"),!!o||r.name==="loading"),n),y=a;y===void 0&&s&&(y=-1);var w=i?{msTransform:"rotate(".concat(i,"deg)"),transform:"rotate(".concat(i,"deg)")}:void 0,C=GI(c),S=ve(C,2),E=S[0],k=S[1];return d.createElement("span",$e({role:"img","aria-label":r.name},u,{ref:t,tabIndex:y,onClick:s,className:b}),d.createElement(ac,{icon:r,primaryColor:E,secondaryColor:k,style:w}))});en.displayName="AntdIcon";en.getTwoToneColor=JA;en.setTwoToneColor=YI;var t5=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:WA}))},Jy=d.forwardRef(t5),n5={icon:{tag:"svg",attrs:{"fill-rule":"evenodd",viewBox:"64 64 896 
896",focusable:"false"},children:[{tag:"path",attrs:{d:"M512 64c247.4 0 448 200.6 448 448S759.4 960 512 960 64 759.4 64 512 264.6 64 512 64zm127.98 274.82h-.04l-.08.06L512 466.75 384.14 338.88c-.04-.05-.06-.06-.08-.06a.12.12 0 00-.07 0c-.03 0-.05.01-.09.05l-45.02 45.02a.2.2 0 00-.05.09.12.12 0 000 .07v.02a.27.27 0 00.06.06L466.75 512 338.88 639.86c-.05.04-.06.06-.06.08a.12.12 0 000 .07c0 .03.01.05.05.09l45.02 45.02a.2.2 0 00.09.05.12.12 0 00.07 0c.02 0 .04-.01.08-.05L512 557.25l127.86 127.87c.04.04.06.05.08.05a.12.12 0 00.07 0c.03 0 .05-.01.09-.05l45.02-45.02a.2.2 0 00.05-.09.12.12 0 000-.07v-.02a.27.27 0 00-.05-.06L557.25 512l127.87-127.86c.04-.04.05-.06.05-.08a.12.12 0 000-.07c0-.03-.01-.05-.05-.09l-45.02-45.02a.2.2 0 00-.09-.05.12.12 0 00-.07 0z"}}]},name:"close-circle",theme:"filled"},r5=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:n5}))},bd=d.forwardRef(r5),o5={icon:{tag:"svg",attrs:{"fill-rule":"evenodd",viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M799.86 166.31c.02 0 .04.02.08.06l57.69 57.7c.04.03.05.05.06.08a.12.12 0 010 .06c0 .03-.02.05-.06.09L569.93 512l287.7 287.7c.04.04.05.06.06.09a.12.12 0 010 .07c0 .02-.02.04-.06.08l-57.7 57.69c-.03.04-.05.05-.07.06a.12.12 0 01-.07 0c-.03 0-.05-.02-.09-.06L512 569.93l-287.7 287.7c-.04.04-.06.05-.09.06a.12.12 0 01-.07 0c-.02 0-.04-.02-.08-.06l-57.69-57.7c-.04-.03-.05-.05-.06-.07a.12.12 0 010-.07c0-.03.02-.05.06-.09L454.07 512l-287.7-287.7c-.04-.04-.05-.06-.06-.09a.12.12 0 010-.07c0-.02.02-.04.06-.08l57.7-57.69c.03-.04.05-.05.07-.06a.12.12 0 01.07 0c.03 0 .05.02.09.06L512 454.07l287.7-287.7c.04-.04.06-.05.09-.06a.12.12 0 01.07 0z"}}]},name:"close",theme:"outlined"},i5=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:o5}))},yd=d.forwardRef(i5),a5={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M512 64C264.6 64 64 264.6 64 512s200.6 448 448 448 448-200.6 448-448S759.4 64 512 64zm-32 232c0-4.4 3.6-8 8-8h48c4.4 0 8 
3.6 8 8v272c0 4.4-3.6 8-8 8h-48c-4.4 0-8-3.6-8-8V296zm32 440a48.01 48.01 0 010-96 48.01 48.01 0 010 96z"}}]},name:"exclamation-circle",theme:"filled"},s5=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:a5}))},Nv=d.forwardRef(s5),l5={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M512 64C264.6 64 64 264.6 64 512s200.6 448 448 448 448-200.6 448-448S759.4 64 512 64zm32 664c0 4.4-3.6 8-8 8h-48c-4.4 0-8-3.6-8-8V456c0-4.4 3.6-8 8-8h48c4.4 0 8 3.6 8 8v272zm-32-344a48.01 48.01 0 010-96 48.01 48.01 0 010 96z"}}]},name:"info-circle",theme:"filled"},c5=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:l5}))},u5=d.forwardRef(c5),d5=`accept acceptCharset accessKey action allowFullScreen allowTransparency - alt async autoComplete autoFocus autoPlay capture cellPadding cellSpacing challenge - charSet checked classID className colSpan cols content contentEditable contextMenu - controls coords crossOrigin data dateTime default defer dir disabled download draggable - encType form formAction formEncType formMethod formNoValidate formTarget frameBorder - headers height hidden high href hrefLang htmlFor httpEquiv icon id inputMode integrity - is keyParams keyType kind label lang list loop low manifest marginHeight marginWidth max maxLength media - mediaGroup method min minLength multiple muted name noValidate nonce open - optimum pattern placeholder poster preload radioGroup readOnly rel required - reversed role rowSpan rows sandbox scope scoped scrolling seamless selected - shape size sizes span spellCheck src srcDoc srcLang srcSet start step style - summary tabIndex target title type useMap value width wmode wrap`,f5=`onCopy onCut onPaste onCompositionEnd onCompositionStart onCompositionUpdate onKeyDown - onKeyPress onKeyUp onFocus onBlur onChange onInput onSubmit onClick onContextMenu onDoubleClick - onDrag onDragEnd onDragEnter onDragExit onDragLeave onDragOver onDragStart onDrop onMouseDown - onMouseEnter 
onMouseLeave onMouseMove onMouseOut onMouseOver onMouseUp onSelect onTouchCancel - onTouchEnd onTouchMove onTouchStart onScroll onWheel onAbort onCanPlay onCanPlayThrough - onDurationChange onEmptied onEncrypted onEnded onError onLoadedData onLoadedMetadata - onLoadStart onPause onPlay onPlaying onProgress onRateChange onSeeked onSeeking onStalled onSuspend onTimeUpdate onVolumeChange onWaiting onLoad onError`,p5="".concat(d5," ").concat(f5).split(/[\s\n]+/),v5="aria-",h5="data-";function eE(e,t){return e.indexOf(t)===0}function Gr(e){var t=arguments.length>1&&arguments[1]!==void 0?arguments[1]:!1,n;t===!1?n={aria:!0,data:!0,attr:!0}:t===!0?n={aria:!0}:n=Z({},t);var r={};return Object.keys(e).forEach(function(o){(n.aria&&(o==="role"||eE(o,v5))||n.data&&eE(o,h5)||n.attr&&p5.includes(o))&&(r[o]=e[o])}),r}function QI(e){return e&&ue.isValidElement(e)&&e.type===ue.Fragment}const ZI=(e,t,n)=>ue.isValidElement(e)?ue.cloneElement(e,typeof n=="function"?n(e.props||{}):n):t;function Dr(e,t){return ZI(e,e,t)}const tE=e=>typeof e=="object"&&e!=null&&e.nodeType===1,nE=(e,t)=>(!t||e!=="hidden")&&e!=="visible"&&e!=="clip",dm=(e,t)=>{if(e.clientHeight{const o=(i=>{if(!i.ownerDocument||!i.ownerDocument.defaultView)return null;try{return i.ownerDocument.defaultView.frameElement}catch{return null}})(r);return!!o&&(o.clientHeightit||i>e&&a=t&&s>=n?i-e-r:a>t&&sn?a-t+o:0,g5=e=>{const t=e.parentElement;return t??(e.getRootNode().host||null)},rE=(e,t)=>{var n,r,o,i;if(typeof document>"u")return[];const{scrollMode:a,block:s,inline:c,boundary:u,skipOverflowHiddenElements:p}=t,v=typeof u=="function"?u:H=>H!==u;if(!tE(e))throw new TypeError("Invalid target");const h=document.scrollingElement||document.documentElement,m=[];let b=e;for(;tE(b)&&v(b);){if(b=g5(b),b===h){m.push(b);break}b!=null&&b===document.body&&dm(b)&&!dm(document.documentElement)||b!=null&&dm(b,p)&&m.push(b)}const y=(r=(n=window.visualViewport)==null?void 0:n.width)!=null?r:innerWidth,w=(i=(o=window.visualViewport)==null?void 
0:o.height)!=null?i:innerHeight,{scrollX:C,scrollY:S}=window,{height:E,width:k,top:O,right:$,bottom:T,left:M}=e.getBoundingClientRect(),{top:P,right:R,bottom:A,left:V}=(H=>{const j=window.getComputedStyle(H);return{top:parseFloat(j.scrollMarginTop)||0,right:parseFloat(j.scrollMarginRight)||0,bottom:parseFloat(j.scrollMarginBottom)||0,left:parseFloat(j.scrollMarginLeft)||0}})(e);let z=s==="start"||s==="nearest"?O-P:s==="end"?T+A:O+E/2-P+A,B=c==="center"?M+k/2-V+R:c==="end"?$+R:M-V;const _=[];for(let H=0;H=0&&M>=0&&T<=w&&$<=y&&O>=U&&T<=W&&M>=G&&$<=D)return _;const q=getComputedStyle(j),J=parseInt(q.borderLeftWidth,10),Y=parseInt(q.borderTopWidth,10),Q=parseInt(q.borderRightWidth,10),te=parseInt(q.borderBottomWidth,10);let ce=0,se=0;const ne="offsetWidth"in j?j.offsetWidth-j.clientWidth-J-Q:0,ae="offsetHeight"in j?j.offsetHeight-j.clientHeight-Y-te:0,ee="offsetWidth"in j?j.offsetWidth===0?0:F/j.offsetWidth:0,re="offsetHeight"in j?j.offsetHeight===0?0:L/j.offsetHeight:0;if(h===j)ce=s==="start"?z:s==="end"?z-w:s==="nearest"?Yf(S,S+w,w,Y,te,S+z,S+z+E,E):z-w/2,se=c==="start"?B:c==="center"?B-y/2:c==="end"?B-y:Yf(C,C+y,y,J,Q,C+B,C+B+k,k),ce=Math.max(0,ce+S),se=Math.max(0,se+C);else{ce=s==="start"?z-U-Y:s==="end"?z-W+te+ae:s==="nearest"?Yf(U,W,L,Y,te+ae,z,z+E,E):z-(U+L/2)+ae/2,se=c==="start"?B-G-J:c==="center"?B-(G+F/2)+ne/2:c==="end"?B-D+Q+ne:Yf(G,D,F,J,Q+ne,B,B+k,k);const{scrollLeft:le,scrollTop:pe}=j;ce=re===0?0:Math.max(0,Math.min(pe+ce/re,j.scrollHeight-L/re+ae)),se=ee===0?0:Math.max(0,Math.min(le+se/ee,j.scrollWidth-F/ee+ne)),z+=pe-ce,B+=le-se}_.push({el:j,top:ce,left:se})}return _},m5=e=>e===!1?{block:"end",inline:"nearest"}:(t=>t===Object(t)&&Object.keys(t).length!==0)(e)?e:{block:"start",inline:"nearest"};function b5(e,t){if(!e.isConnected||!(o=>{let i=o;for(;i&&i.parentNode;){if(i.parentNode===document)return!0;i=i.parentNode instanceof ShadowRoot?i.parentNode.host:i.parentNode}return!1})(e))return;const n=(o=>{const 
i=window.getComputedStyle(o);return{top:parseFloat(i.scrollMarginTop)||0,right:parseFloat(i.scrollMarginRight)||0,bottom:parseFloat(i.scrollMarginBottom)||0,left:parseFloat(i.scrollMarginLeft)||0}})(e);if((o=>typeof o=="object"&&typeof o.behavior=="function")(t))return t.behavior(rE(e,t));const r=typeof t=="boolean"||t==null?void 0:t.behavior;for(const{el:o,top:i,left:a}of rE(e,m5(t))){const s=i-n.top+n.bottom,c=a-n.left+n.right;o.scroll({top:s,left:c,behavior:r})}}function sb(e){return e!=null&&e===e.window}const y5=e=>{var t,n;if(typeof window>"u")return 0;let r=0;return sb(e)?r=e.pageYOffset:e instanceof Document?r=e.documentElement.scrollTop:(e instanceof HTMLElement||e)&&(r=e.scrollTop),e&&!sb(e)&&typeof r!="number"&&(r=(n=((t=e.ownerDocument)!==null&&t!==void 0?t:e).documentElement)===null||n===void 0?void 0:n.scrollTop),r};function w5(e,t,n,r){const o=n-t;return e/=r/2,e<1?o/2*e*e*e+t:o/2*((e-=2)*e*e+2)+t}function x5(e){let t=arguments.length>1&&arguments[1]!==void 0?arguments[1]:{};const{getContainer:n=()=>window,callback:r,duration:o=450}=t,i=n(),a=y5(i),s=Date.now(),c=()=>{const p=Date.now()-s,v=w5(p>o?o:p,a,e,o);sb(i)?i.scrollTo(window.pageXOffset,v):i instanceof Document||i.constructor.name==="HTMLDocument"?i.documentElement.scrollTop=v:i.scrollTop=v,p{const[,,,,t]=Ir();return t?`${e}-css-var`:""};var 
De={MAC_ENTER:3,BACKSPACE:8,TAB:9,NUM_CENTER:12,ENTER:13,SHIFT:16,CTRL:17,ALT:18,PAUSE:19,CAPS_LOCK:20,ESC:27,SPACE:32,PAGE_UP:33,PAGE_DOWN:34,END:35,HOME:36,LEFT:37,UP:38,RIGHT:39,DOWN:40,PRINT_SCREEN:44,INSERT:45,DELETE:46,ZERO:48,ONE:49,TWO:50,THREE:51,FOUR:52,FIVE:53,SIX:54,SEVEN:55,EIGHT:56,NINE:57,QUESTION_MARK:63,A:65,B:66,C:67,D:68,E:69,F:70,G:71,H:72,I:73,J:74,K:75,L:76,M:77,N:78,O:79,P:80,Q:81,R:82,S:83,T:84,U:85,V:86,W:87,X:88,Y:89,Z:90,META:91,WIN_KEY_RIGHT:92,CONTEXT_MENU:93,NUM_ZERO:96,NUM_ONE:97,NUM_TWO:98,NUM_THREE:99,NUM_FOUR:100,NUM_FIVE:101,NUM_SIX:102,NUM_SEVEN:103,NUM_EIGHT:104,NUM_NINE:105,NUM_MULTIPLY:106,NUM_PLUS:107,NUM_MINUS:109,NUM_PERIOD:110,NUM_DIVISION:111,F1:112,F2:113,F3:114,F4:115,F5:116,F6:117,F7:118,F8:119,F9:120,F10:121,F11:122,F12:123,NUMLOCK:144,SEMICOLON:186,DASH:189,EQUALS:187,COMMA:188,PERIOD:190,SLASH:191,APOSTROPHE:192,SINGLE_QUOTE:222,OPEN_SQUARE_BRACKET:219,BACKSLASH:220,CLOSE_SQUARE_BRACKET:221,WIN_KEY:224,MAC_FF_META:224,WIN_IME:229,isTextModifyingKeyEvent:function(t){var n=t.keyCode;if(t.altKey&&!t.ctrlKey||t.metaKey||n>=De.F1&&n<=De.F12)return!1;switch(n){case De.ALT:case De.CAPS_LOCK:case De.CONTEXT_MENU:case De.CTRL:case De.DOWN:case De.END:case De.ESC:case De.HOME:case De.INSERT:case De.LEFT:case De.MAC_FF_META:case De.META:case De.NUMLOCK:case De.NUM_CENTER:case De.PAGE_DOWN:case De.PAGE_UP:case De.PAUSE:case De.PRINT_SCREEN:case De.RIGHT:case De.SHIFT:case De.UP:case De.WIN_KEY:case De.WIN_KEY_RIGHT:return!1;default:return!0}},isCharacterKey:function(t){if(t>=De.ZERO&&t<=De.NINE||t>=De.NUM_ZERO&&t<=De.NUM_MULTIPLY||t>=De.A&&t<=De.Z||window.navigator.userAgent.indexOf("WebKit")!==-1&&t===0)return!0;switch(t){case De.SPACE:case De.QUESTION_MARK:case De.NUM_PLUS:case De.NUM_MINUS:case De.NUM_PERIOD:case De.NUM_DIVISION:case De.SEMICOLON:case De.DASH:case De.EQUALS:case De.COMMA:case De.PERIOD:case De.SLASH:case De.APOSTROPHE:case De.SINGLE_QUOTE:case De.OPEN_SQUARE_BRACKET:case De.BACKSLASH:case 
De.CLOSE_SQUARE_BRACKET:return!0;default:return!1}}},S5={icon:{tag:"svg",attrs:{viewBox:"0 0 1024 1024",focusable:"false"},children:[{tag:"path",attrs:{d:"M988 548c-19.9 0-36-16.1-36-36 0-59.4-11.6-117-34.6-171.3a440.45 440.45 0 00-94.3-139.9 437.71 437.71 0 00-139.9-94.3C629 83.6 571.4 72 512 72c-19.9 0-36-16.1-36-36s16.1-36 36-36c69.1 0 136.2 13.5 199.3 40.3C772.3 66 827 103 874 150c47 47 83.9 101.8 109.7 162.7 26.7 63.1 40.2 130.2 40.2 199.3.1 19.9-16 36-35.9 36z"}}]},name:"loading",theme:"outlined"},C5=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:S5}))},Xa=d.forwardRef(C5);const Rv=ue.createContext(void 0),Ma=100,E5=10,k5=Ma*E5,JI={Modal:Ma,Drawer:Ma,Popover:Ma,Popconfirm:Ma,Tooltip:Ma,Tour:Ma,FloatButton:Ma},O5={SelectLike:50,Dropdown:50,DatePicker:50,Menu:50,ImagePreview:1};function $5(e){return e in JI}const sc=(e,t)=>{const[,n]=Ir(),r=ue.useContext(Rv),o=$5(e);let i;if(t!==void 0)i=[t,t];else{let a=r??0;o?a+=(r?0:n.zIndexPopupBase)+JI[e]:a+=O5[e],i=[r===void 0?t:a,a]}return i};function I5(){const[e,t]=d.useState([]),n=d.useCallback(r=>(t(o=>[].concat(Se(o),[r])),()=>{t(o=>o.filter(i=>i!==r))}),[]);return[e,n]}function $n(){$n=function(){return t};var e,t={},n=Object.prototype,r=n.hasOwnProperty,o=Object.defineProperty||function(H,j,L){H[j]=L.value},i=typeof Symbol=="function"?Symbol:{},a=i.iterator||"@@iterator",s=i.asyncIterator||"@@asyncIterator",c=i.toStringTag||"@@toStringTag";function u(H,j,L){return Object.defineProperty(H,j,{value:L,enumerable:!0,configurable:!0,writable:!0}),H[j]}try{u({},"")}catch{u=function(L,F,U){return L[F]=U}}function p(H,j,L,F){var U=j&&j.prototype instanceof C?j:C,D=Object.create(U.prototype),W=new B(F||[]);return o(D,"_invoke",{value:R(H,L,W)}),D}function v(H,j,L){try{return{type:"normal",arg:H.call(j,L)}}catch(F){return{type:"throw",arg:F}}}t.wrap=p;var h="suspendedStart",m="suspendedYield",b="executing",y="completed",w={};function C(){}function S(){}function E(){}var k={};u(k,a,function(){return this});var 
O=Object.getPrototypeOf,$=O&&O(O(_([])));$&&$!==n&&r.call($,a)&&(k=$);var T=E.prototype=C.prototype=Object.create(k);function M(H){["next","throw","return"].forEach(function(j){u(H,j,function(L){return this._invoke(j,L)})})}function P(H,j){function L(U,D,W,G){var q=v(H[U],H,D);if(q.type!=="throw"){var J=q.arg,Y=J.value;return Y&&st(Y)=="object"&&r.call(Y,"__await")?j.resolve(Y.__await).then(function(Q){L("next",Q,W,G)},function(Q){L("throw",Q,W,G)}):j.resolve(Y).then(function(Q){J.value=Q,W(J)},function(Q){return L("throw",Q,W,G)})}G(q.arg)}var F;o(this,"_invoke",{value:function(D,W){function G(){return new j(function(q,J){L(D,W,q,J)})}return F=F?F.then(G,G):G()}})}function R(H,j,L){var F=h;return function(U,D){if(F===b)throw Error("Generator is already running");if(F===y){if(U==="throw")throw D;return{value:e,done:!0}}for(L.method=U,L.arg=D;;){var W=L.delegate;if(W){var G=A(W,L);if(G){if(G===w)continue;return G}}if(L.method==="next")L.sent=L._sent=L.arg;else if(L.method==="throw"){if(F===h)throw F=y,L.arg;L.dispatchException(L.arg)}else L.method==="return"&&L.abrupt("return",L.arg);F=b;var q=v(H,j,L);if(q.type==="normal"){if(F=L.done?y:m,q.arg===w)continue;return{value:q.arg,done:L.done}}q.type==="throw"&&(F=y,L.method="throw",L.arg=q.arg)}}}function A(H,j){var L=j.method,F=H.iterator[L];if(F===e)return j.delegate=null,L==="throw"&&H.iterator.return&&(j.method="return",j.arg=e,A(H,j),j.method==="throw")||L!=="return"&&(j.method="throw",j.arg=new TypeError("The iterator does not provide a '"+L+"' method")),w;var U=v(F,H.iterator,j.arg);if(U.type==="throw")return j.method="throw",j.arg=U.arg,j.delegate=null,w;var D=U.arg;return D?D.done?(j[H.resultName]=D.value,j.next=H.nextLoc,j.method!=="return"&&(j.method="next",j.arg=e),j.delegate=null,w):D:(j.method="throw",j.arg=new TypeError("iterator result is not an object"),j.delegate=null,w)}function V(H){var j={tryLoc:H[0]};1 in H&&(j.catchLoc=H[1]),2 in 
H&&(j.finallyLoc=H[2],j.afterLoc=H[3]),this.tryEntries.push(j)}function z(H){var j=H.completion||{};j.type="normal",delete j.arg,H.completion=j}function B(H){this.tryEntries=[{tryLoc:"root"}],H.forEach(V,this),this.reset(!0)}function _(H){if(H||H===""){var j=H[a];if(j)return j.call(H);if(typeof H.next=="function")return H;if(!isNaN(H.length)){var L=-1,F=function U(){for(;++L=0;--U){var D=this.tryEntries[U],W=D.completion;if(D.tryLoc==="root")return F("end");if(D.tryLoc<=this.prev){var G=r.call(D,"catchLoc"),q=r.call(D,"finallyLoc");if(G&&q){if(this.prev=0;--F){var U=this.tryEntries[F];if(U.tryLoc<=this.prev&&r.call(U,"finallyLoc")&&this.prev=0;--L){var F=this.tryEntries[L];if(F.finallyLoc===j)return this.complete(F.completion,F.afterLoc),z(F),w}},catch:function(j){for(var L=this.tryEntries.length-1;L>=0;--L){var F=this.tryEntries[L];if(F.tryLoc===j){var U=F.completion;if(U.type==="throw"){var D=U.arg;z(F)}return D}}throw Error("illegal catch attempt")},delegateYield:function(j,L,F){return this.delegate={iterator:_(j),resultName:L,nextLoc:F},this.method==="next"&&(this.arg=e),w}},t}function oE(e,t,n,r,o,i,a){try{var s=e[i](a),c=s.value}catch(u){return void n(u)}s.done?t(c):Promise.resolve(c).then(r,o)}function yo(e){return function(){var t=this,n=arguments;return new Promise(function(r,o){var i=e.apply(t,n);function a(c){oE(i,r,o,a,s,"next",c)}function s(c){oE(i,r,o,a,s,"throw",c)}a(void 0)})}}var wd=Z({},xL),T5=wd.version,P5=wd.render,M5=wd.unmountComponentAtNode,Dv;try{var N5=Number((T5||"").split(".")[0]);N5>=18&&(Dv=wd.createRoot)}catch{}function iE(e){var t=wd.__SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED;t&&st(t)==="object"&&(t.usingClientEntryPoint=e)}var Gp="__rc_react_root__";function R5(e,t){iE(!0);var n=t[Gp]||Dv(t);iE(!1),n.render(e),t[Gp]=n}function D5(e,t){P5(e,t)}function eT(e,t){if(Dv){R5(e,t);return}D5(e,t)}function j5(e){return lb.apply(this,arguments)}function lb(){return lb=yo($n().mark(function e(t){return 
$n().wrap(function(r){for(;;)switch(r.prev=r.next){case 0:return r.abrupt("return",Promise.resolve().then(function(){var o;(o=t[Gp])===null||o===void 0||o.unmount(),delete t[Gp]}));case 1:case"end":return r.stop()}},e)})),lb.apply(this,arguments)}function L5(e){M5(e)}function tT(e){return cb.apply(this,arguments)}function cb(){return cb=yo($n().mark(function e(t){return $n().wrap(function(r){for(;;)switch(r.prev=r.next){case 0:if(Dv===void 0){r.next=2;break}return r.abrupt("return",j5(t));case 2:L5(t);case 3:case"end":return r.stop()}},e)})),cb.apply(this,arguments)}const fm=()=>({height:0,opacity:0}),aE=e=>{const{scrollHeight:t}=e;return{height:t,opacity:1}},B5=e=>({height:e?e.offsetHeight:0}),pm=(e,t)=>(t==null?void 0:t.deadline)===!0||t.propertyName==="height",Ju=function(){return{motionName:`${arguments.length>0&&arguments[0]!==void 0?arguments[0]:Qu}-motion-collapse`,onAppearStart:fm,onEnterStart:fm,onAppearActive:aE,onEnterActive:aE,onLeaveStart:B5,onLeaveActive:fm,onAppearEnd:pm,onEnterEnd:pm,onLeaveEnd:pm,motionDeadline:500}},ra=(e,t,n)=>n!==void 0?n:`${e}-${t}`,xd=function(e){if(!e)return!1;if(e instanceof Element){if(e.offsetParent)return!0;if(e.getBBox){var t=e.getBBox(),n=t.width,r=t.height;if(n||r)return!0}if(e.getBoundingClientRect){var o=e.getBoundingClientRect(),i=o.width,a=o.height;if(i||a)return!0}}return!1},A5=e=>{const{componentCls:t,colorPrimary:n}=e;return{[t]:{position:"absolute",background:"transparent",pointerEvents:"none",boxSizing:"border-box",color:`var(--wave-color, ${n})`,boxShadow:"0 0 0 0 currentcolor",opacity:.2,"&.wave-motion-appear":{transition:[`box-shadow 0.4s ${e.motionEaseOutCirc}`,`opacity 2s ${e.motionEaseOutCirc}`].join(","),"&-active":{boxShadow:"0 0 0 6px currentcolor",opacity:0},"&.wave-quick":{transition:[`box-shadow ${e.motionDurationSlow} ${e.motionEaseInOut}`,`opacity ${e.motionDurationSlow} ${e.motionEaseInOut}`].join(",")}}}}},z5=MI("Wave",e=>[A5(e)]),jv=`${Qu}-wave-target`;function vm(e){return 
e&&e!=="#fff"&&e!=="#ffffff"&&e!=="rgb(255, 255, 255)"&&e!=="rgba(255, 255, 255, 1)"&&!/rgba\((?:\d*, ){3}0\)/.test(e)&&e!=="transparent"}function H5(e){const{borderTopColor:t,borderColor:n,backgroundColor:r}=getComputedStyle(e);return vm(t)?t:vm(n)?n:vm(r)?r:null}function hm(e){return Number.isNaN(e)?0:e}const F5=e=>{const{className:t,target:n,component:r}=e,o=d.useRef(null),[i,a]=d.useState(null),[s,c]=d.useState([]),[u,p]=d.useState(0),[v,h]=d.useState(0),[m,b]=d.useState(0),[y,w]=d.useState(0),[C,S]=d.useState(!1),E={left:u,top:v,width:m,height:y,borderRadius:s.map($=>`${$}px`).join(" ")};i&&(E["--wave-color"]=i);function k(){const $=getComputedStyle(n);a(H5(n));const T=$.position==="static",{borderLeftWidth:M,borderTopWidth:P}=$;p(T?n.offsetLeft:hm(-parseFloat(M))),h(T?n.offsetTop:hm(-parseFloat(P))),b(n.offsetWidth),w(n.offsetHeight);const{borderTopLeftRadius:R,borderTopRightRadius:A,borderBottomLeftRadius:V,borderBottomRightRadius:z}=$;c([R,A,z,V].map(B=>hm(parseFloat(B))))}if(d.useEffect(()=>{if(n){const $=bn(()=>{k(),S(!0)});let T;return typeof ResizeObserver<"u"&&(T=new ResizeObserver(k),T.observe(n)),()=>{bn.cancel($),T==null||T.disconnect()}}},[]),!C)return null;const O=(r==="Checkbox"||r==="Radio")&&(n==null?void 0:n.classList.contains(jv));return d.createElement(Xo,{visible:!0,motionAppear:!0,motionName:"wave-motion",motionDeadline:5e3,onAppearEnd:($,T)=>{var M;if(T.deadline||T.propertyName==="opacity"){const P=(M=o.current)===null||M===void 0?void 0:M.parentElement;tT(P).then(()=>{P==null||P.remove()})}return!1}},($,T)=>{let{className:M}=$;return d.createElement("div",{ref:Wr(o,T),className:ie(t,M,{"wave-quick":O}),style:E})})},_5=(e,t)=>{var n;const{component:r}=t;if(r==="Checkbox"&&!(!((n=e.querySelector("input"))===null||n===void 0)&&n.checked))return;const o=document.createElement("div");o.style.position="absolute",o.style.left="0px",o.style.top="0px",e==null||e.insertBefore(o,e==null?void 
0:e.firstChild),eT(d.createElement(F5,Object.assign({},t,{target:e})),o)},V5=(e,t,n)=>{const{wave:r}=d.useContext(ht),[,o,i]=Ir(),a=gn(u=>{const p=e.current;if(r!=null&&r.disabled||!p)return;const v=p.querySelector(`.${jv}`)||p,{showEffect:h}=r||{};(h||_5)(v,{className:t,token:o,component:n,event:u,hashId:i})}),s=d.useRef();return u=>{bn.cancel(s.current),s.current=bn(()=>{a(u)})}},Lv=e=>{const{children:t,disabled:n,component:r}=e,{getPrefixCls:o}=d.useContext(ht),i=d.useRef(null),a=o("wave"),[,s]=z5(a),c=V5(i,ie(a,s),r);if(ue.useEffect(()=>{const p=i.current;if(!p||p.nodeType!==1||n)return;const v=h=>{!xd(h.target)||!p.getAttribute||p.getAttribute("disabled")||p.disabled||p.className.includes("disabled")||p.className.includes("-leave")||c(h)};return p.addEventListener("click",v,!0),()=>{p.removeEventListener("click",v,!0)}},[n]),!ue.isValidElement(t))return t??null;const u=vi(t)?Wr(t.ref,i):i;return Dr(t,{ref:u})},Go=e=>{const t=ue.useContext(Is);return ue.useMemo(()=>e?typeof e=="string"?e??t:e instanceof Function?e(t):t:t,[e,t])},W5=e=>{const{componentCls:t}=e;return{[t]:{"&-block":{display:"flex",width:"100%"},"&-vertical":{flexDirection:"column"}}}},U5=e=>{const{componentCls:t,antCls:n}=e;return{[t]:{display:"inline-flex","&-rtl":{direction:"rtl"},"&-vertical":{flexDirection:"column"},"&-align":{flexDirection:"column","&-center":{alignItems:"center"},"&-start":{alignItems:"flex-start"},"&-end":{alignItems:"flex-end"},"&-baseline":{alignItems:"baseline"}},[`${t}-item:empty`]:{display:"none"},[`${t}-item > ${n}-badge-not-a-wrapper:only-child`]:{display:"block"}}}},K5=e=>{const{componentCls:t}=e;return{[t]:{"&-gap-row-small":{rowGap:e.spaceGapSmallSize},"&-gap-row-middle":{rowGap:e.spaceGapMiddleSize},"&-gap-row-large":{rowGap:e.spaceGapLargeSize},"&-gap-col-small":{columnGap:e.spaceGapSmallSize},"&-gap-col-middle":{columnGap:e.spaceGapMiddleSize},"&-gap-col-large":{columnGap:e.spaceGapLargeSize}}}},nT=In("Space",e=>{const 
t=vn(e,{spaceGapSmallSize:e.paddingXS,spaceGapMiddleSize:e.padding,spaceGapLargeSize:e.paddingLG});return[U5(t),K5(t),W5(t)]},()=>({}),{resetStyle:!1});var rT=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{const n=d.useContext(Bv),r=d.useMemo(()=>{if(!n)return"";const{compactDirection:o,isFirstItem:i,isLastItem:a}=n,s=o==="vertical"?"-vertical-":"-";return ie(`${e}-compact${s}item`,{[`${e}-compact${s}first-item`]:i,[`${e}-compact${s}last-item`]:a,[`${e}-compact${s}item-rtl`]:t==="rtl"})},[e,t,n]);return{compactSize:n==null?void 0:n.compactSize,compactDirection:n==null?void 0:n.compactDirection,compactItemClassnames:r}},q5=e=>{let{children:t}=e;return d.createElement(Bv.Provider,{value:null},t)},X5=e=>{var{children:t}=e,n=rT(e,["children"]);return d.createElement(Bv.Provider,{value:n},t)},G5=e=>{const{getPrefixCls:t,direction:n}=d.useContext(ht),{size:r,direction:o,block:i,prefixCls:a,className:s,rootClassName:c,children:u}=e,p=rT(e,["size","direction","block","prefixCls","className","rootClassName","children"]),v=Go(E=>r??E),h=t("space-compact",a),[m,b]=nT(h),y=ie(h,b,{[`${h}-rtl`]:n==="rtl",[`${h}-block`]:i,[`${h}-vertical`]:o==="vertical"},s,c),w=d.useContext(Bv),C=lo(u),S=d.useMemo(()=>C.map((E,k)=>{const O=(E==null?void 0:E.key)||`${h}-item-${k}`;return d.createElement(X5,{key:O,compactSize:v,compactDirection:o,isFirstItem:k===0&&(!w||(w==null?void 0:w.isFirstItem)),isLastItem:k===C.length-1&&(!w||(w==null?void 0:w.isLastItem))},E)}),[r,C,w]);return C.length===0?null:m(d.createElement("div",Object.assign({className:y},p),S))};var Y5=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var 
o=0,r=Object.getOwnPropertySymbols(e);o{const{getPrefixCls:t,direction:n}=d.useContext(ht),{prefixCls:r,size:o,className:i}=e,a=Y5(e,["prefixCls","size","className"]),s=t("btn-group",r),[,,c]=Ir();let u="";switch(o){case"large":u="lg";break;case"small":u="sm";break}const p=ie(s,{[`${s}-${u}`]:u,[`${s}-rtl`]:n==="rtl"},i,c);return d.createElement(oT.Provider,{value:o},d.createElement("div",Object.assign({},a,{className:p})))},sE=/^[\u4E00-\u9FA5]{2}$/,ub=sE.test.bind(sE);function ew(e){return e==="danger"?{danger:!0}:{type:e}}function lE(e){return typeof e=="string"}function gm(e){return e==="text"||e==="link"}function Z5(e,t){if(e==null)return;const n=t?" ":"";return typeof e!="string"&&typeof e!="number"&&lE(e.type)&&ub(e.props.children)?Dr(e,{children:e.props.children.split("").join(n)}):lE(e)?ub(e)?ue.createElement("span",null,e.split("").join(n)):ue.createElement("span",null,e):QI(e)?ue.createElement("span",null,e):e}function J5(e,t){let n=!1;const r=[];return ue.Children.forEach(e,o=>{const i=typeof o,a=i==="string"||i==="number";if(n&&a){const s=r.length-1,c=r[s];r[s]=`${c}${o}`}else r.push(o);n=a}),ue.Children.map(r,o=>Z5(o,t))}const iT=d.forwardRef((e,t)=>{const{className:n,style:r,children:o,prefixCls:i}=e,a=ie(`${i}-icon`,n);return ue.createElement("span",{ref:t,className:a,style:r},o)}),cE=d.forwardRef((e,t)=>{const{prefixCls:n,className:r,style:o,iconClassName:i}=e,a=ie(`${n}-loading-icon`,r);return ue.createElement(iT,{prefixCls:n,className:a,style:o,ref:t},ue.createElement(Xa,{className:i}))}),mm=()=>({width:0,opacity:0,transform:"scale(0)"}),bm=e=>({width:e.scrollWidth,opacity:1,transform:"scale(1)"}),ez=e=>{const{prefixCls:t,loading:n,existIcon:r,className:o,style:i}=e,a=!!n;return 
r?ue.createElement(cE,{prefixCls:t,className:o,style:i}):ue.createElement(Xo,{visible:a,motionName:`${t}-loading-icon-motion`,motionLeave:a,removeOnLeave:!0,onAppearStart:mm,onAppearActive:bm,onEnterStart:mm,onEnterActive:bm,onLeaveStart:bm,onLeaveActive:mm},(s,c)=>{let{className:u,style:p}=s;return ue.createElement(cE,{prefixCls:t,className:o,style:Object.assign(Object.assign({},i),p),ref:c,iconClassName:u})})},uE=(e,t)=>({[`> span, > ${e}`]:{"&:not(:last-child)":{[`&, & > ${e}`]:{"&:not(:disabled)":{borderInlineEndColor:t}}},"&:not(:first-child)":{[`&, & > ${e}`]:{"&:not(:disabled)":{borderInlineStartColor:t}}}}}),tz=e=>{const{componentCls:t,fontSize:n,lineWidth:r,groupBorderColor:o,colorErrorHover:i}=e;return{[`${t}-group`]:[{position:"relative",display:"inline-flex",[`> span, > ${t}`]:{"&:not(:last-child)":{[`&, & > ${t}`]:{borderStartEndRadius:0,borderEndEndRadius:0}},"&:not(:first-child)":{marginInlineStart:e.calc(r).mul(-1).equal(),[`&, & > ${t}`]:{borderStartStartRadius:0,borderEndStartRadius:0}}},[t]:{position:"relative",zIndex:1,"&:hover, &:focus, &:active":{zIndex:2},"&[disabled]":{zIndex:0}},[`${t}-icon-only`]:{fontSize:n}},uE(`${t}-primary`,o),uE(`${t}-danger`,i)]}},kr=Math.round;function ym(e,t){const n=e.replace(/^[^(]*\((.*)/,"$1").replace(/\).*/,"").match(/\d*\.?\d+%?/g)||[],r=n.map(o=>parseFloat(o));for(let o=0;o<3;o+=1)r[o]=t(r[o]||0,n[o]||"",o);return n[3]?r[3]=n[3].includes("%")?r[3]/100:r[3]:r[3]=1,r}const dE=(e,t,n)=>n===0?e:e/100;function Zc(e,t){const n=t||255;return e>n?n:e<0?0:e}class Av{constructor(t){K(this,"isValid",!0),K(this,"r",0),K(this,"g",0),K(this,"b",0),K(this,"a",1),K(this,"_h",void 0),K(this,"_s",void 0),K(this,"_l",void 0),K(this,"_v",void 0),K(this,"_max",void 0),K(this,"_min",void 0),K(this,"_brightness",void 0);function n(r){return r[0]in t&&r[1]in t&&r[2]in t}if(t)if(typeof t=="string"){let o=function(i){return r.startsWith(i)};const 
r=t.trim();/^#?[A-F\d]{3,8}$/i.test(r)?this.fromHexString(r):o("rgb")?this.fromRgbString(r):o("hsl")?this.fromHslString(r):(o("hsv")||o("hsb"))&&this.fromHsvString(r)}else if(t instanceof Av)this.r=t.r,this.g=t.g,this.b=t.b,this.a=t.a,this._h=t._h,this._s=t._s,this._l=t._l,this._v=t._v;else if(n("rgb"))this.r=Zc(t.r),this.g=Zc(t.g),this.b=Zc(t.b),this.a=typeof t.a=="number"?Zc(t.a,1):1;else if(n("hsl"))this.fromHsl(t);else if(n("hsv"))this.fromHsv(t);else throw new Error("@ant-design/fast-color: unsupported input "+JSON.stringify(t))}setR(t){return this._sc("r",t)}setG(t){return this._sc("g",t)}setB(t){return this._sc("b",t)}setA(t){return this._sc("a",t,1)}setHue(t){const n=this.toHsv();return n.h=t,this._c(n)}getLuminance(){function t(i){const a=i/255;return a<=.03928?a/12.92:Math.pow((a+.055)/1.055,2.4)}const n=t(this.r),r=t(this.g),o=t(this.b);return .2126*n+.7152*r+.0722*o}getHue(){if(typeof this._h>"u"){const t=this.getMax()-this.getMin();t===0?this._h=0:this._h=kr(60*(this.r===this.getMax()?(this.g-this.b)/t+(this.g"u"){const t=this.getMax()-this.getMin();t===0?this._s=0:this._s=t/this.getMax()}return this._s}getLightness(){return typeof this._l>"u"&&(this._l=(this.getMax()+this.getMin())/510),this._l}getValue(){return typeof this._v>"u"&&(this._v=this.getMax()/255),this._v}getBrightness(){return typeof this._brightness>"u"&&(this._brightness=(this.r*299+this.g*587+this.b*114)/1e3),this._brightness}darken(t=10){const n=this.getHue(),r=this.getSaturation();let o=this.getLightness()-t/100;return o<0&&(o=0),this._c({h:n,s:r,l:o,a:this.a})}lighten(t=10){const n=this.getHue(),r=this.getSaturation();let o=this.getLightness()+t/100;return o>1&&(o=1),this._c({h:n,s:r,l:o,a:this.a})}mix(t,n=50){const r=this._c(t),o=n/100,i=s=>(r[s]-this[s])*o+this[s],a={r:kr(i("r")),g:kr(i("g")),b:kr(i("b")),a:kr(i("a")*100)/100};return this._c(a)}tint(t=10){return this.mix({r:255,g:255,b:255,a:1},t)}shade(t=10){return this.mix({r:0,g:0,b:0,a:1},t)}onBackground(t){const 
n=this._c(t),r=this.a+n.a*(1-this.a),o=i=>kr((this[i]*this.a+n[i]*n.a*(1-this.a))/r);return this._c({r:o("r"),g:o("g"),b:o("b"),a:r})}isDark(){return this.getBrightness()<128}isLight(){return this.getBrightness()>=128}equals(t){return this.r===t.r&&this.g===t.g&&this.b===t.b&&this.a===t.a}clone(){return this._c(this)}toHexString(){let t="#";const n=(this.r||0).toString(16);t+=n.length===2?n:"0"+n;const r=(this.g||0).toString(16);t+=r.length===2?r:"0"+r;const o=(this.b||0).toString(16);if(t+=o.length===2?o:"0"+o,typeof this.a=="number"&&this.a>=0&&this.a<1){const i=kr(this.a*255).toString(16);t+=i.length===2?i:"0"+i}return t}toHsl(){return{h:this.getHue(),s:this.getSaturation(),l:this.getLightness(),a:this.a}}toHslString(){const t=this.getHue(),n=kr(this.getSaturation()*100),r=kr(this.getLightness()*100);return this.a!==1?`hsla(${t},${n}%,${r}%,${this.a})`:`hsl(${t},${n}%,${r}%)`}toHsv(){return{h:this.getHue(),s:this.getSaturation(),v:this.getValue(),a:this.a}}toRgb(){return{r:this.r,g:this.g,b:this.b,a:this.a}}toRgbString(){return this.a!==1?`rgba(${this.r},${this.g},${this.b},${this.a})`:`rgb(${this.r},${this.g},${this.b})`}toString(){return this.toRgbString()}_sc(t,n,r){const o=this.clone();return o[t]=Zc(n,r),o}_c(t){return new this.constructor(t)}getMax(){return typeof this._max>"u"&&(this._max=Math.max(this.r,this.g,this.b)),this._max}getMin(){return typeof this._min>"u"&&(this._min=Math.min(this.r,this.g,this.b)),this._min}fromHexString(t){const n=t.replace("#","");function r(o,i){return parseInt(n[o]+n[i||o],16)}n.length<6?(this.r=r(0),this.g=r(1),this.b=r(2),this.a=n[3]?r(3)/255:1):(this.r=r(0,1),this.g=r(2,3),this.b=r(4,5),this.a=n[6]?r(6,7)/255:1)}fromHsl({h:t,s:n,l:r,a:o}){if(this._h=t%360,this._s=n,this._l=r,this.a=typeof o=="number"?o:1,n<=0){const h=kr(r*255);this.r=h,this.g=h,this.b=h}let i=0,a=0,s=0;const 
c=t/60,u=(1-Math.abs(2*r-1))*n,p=u*(1-Math.abs(c%2-1));c>=0&&c<1?(i=u,a=p):c>=1&&c<2?(i=p,a=u):c>=2&&c<3?(a=u,s=p):c>=3&&c<4?(a=p,s=u):c>=4&&c<5?(i=p,s=u):c>=5&&c<6&&(i=u,s=p);const v=r-u/2;this.r=kr((i+v)*255),this.g=kr((a+v)*255),this.b=kr((s+v)*255)}fromHsv({h:t,s:n,v:r,a:o}){this._h=t%360,this._s=n,this._v=r,this.a=typeof o=="number"?o:1;const i=kr(r*255);if(this.r=i,this.g=i,this.b=i,n<=0)return;const a=t/60,s=Math.floor(a),c=a-s,u=kr(r*(1-n)*255),p=kr(r*(1-n*c)*255),v=kr(r*(1-n*(1-c))*255);switch(s){case 0:this.g=v,this.b=u;break;case 1:this.r=p,this.b=u;break;case 2:this.r=u,this.b=v;break;case 3:this.r=u,this.g=p;break;case 4:this.r=v,this.g=u;break;case 5:default:this.g=u,this.b=p;break}}fromHsvString(t){const n=ym(t,dE);this.fromHsv({h:n[0],s:n[1],v:n[2],a:n[3]})}fromHslString(t){const n=ym(t,dE);this.fromHsl({h:n[0],s:n[1],l:n[2],a:n[3]})}fromRgbString(t){const n=ym(t,(r,o)=>o.includes("%")?kr(r/100*255):r);this.r=n[0],this.g=n[1],this.b=n[2],this.a=n[3]}}var nz=["b"],rz=["v"],wm=function(t){return Math.round(Number(t||0))},oz=function(t){if(t instanceof Av)return t;if(t&&st(t)==="object"&&"h"in t&&"b"in t){var n=t,r=n.b,o=Mt(n,nz);return Z(Z({},o),{},{v:r})}return typeof t=="string"&&/hsb/.test(t)?t.replace(/hsb/,"hsv"):t},ed=function(e){Co(n,e);var t=Eo(n);function n(r){return Kn(this,n),t.call(this,oz(r))}return qn(n,[{key:"toHsbString",value:function(){var o=this.toHsb(),i=wm(o.s*100),a=wm(o.b*100),s=wm(o.h),c=o.a,u="hsb(".concat(s,", ").concat(i,"%, ").concat(a,"%)"),p="hsba(".concat(s,", ").concat(i,"%, ").concat(a,"%, ").concat(c.toFixed(c===0?0:2),")");return c===1?u:p}},{key:"toHsb",value:function(){var o=this.toHsv(),i=o.v,a=Mt(o,rz);return Z(Z({},a),{},{b:i,a:this.a})}}]),n}(Av),iz=function(t){return t instanceof ed?t:new ed(t)};iz("#1677ff");const az=(e,t)=>(e==null?void 0:e.replace(/[^\w/]/g,"").slice(0,t?8:6))||"",sz=(e,t)=>e?az(e,t):"";let lz=function(){function e(t){Kn(this,e);var n;if(this.cleared=!1,t instanceof 
e){this.metaColor=t.metaColor.clone(),this.colors=(n=t.colors)===null||n===void 0?void 0:n.map(o=>({color:new e(o.color),percent:o.percent})),this.cleared=t.cleared;return}const r=Array.isArray(t);r&&t.length?(this.colors=t.map(o=>{let{color:i,percent:a}=o;return{color:new e(i),percent:a}}),this.metaColor=new ed(this.colors[0].color.metaColor)):this.metaColor=new ed(r?"":t),(!t||r&&!this.colors)&&(this.metaColor=this.metaColor.setA(0),this.cleared=!0)}return qn(e,[{key:"toHsb",value:function(){return this.metaColor.toHsb()}},{key:"toHsbString",value:function(){return this.metaColor.toHsbString()}},{key:"toHex",value:function(){return sz(this.toHexString(),this.metaColor.a<1)}},{key:"toHexString",value:function(){return this.metaColor.toHexString()}},{key:"toRgb",value:function(){return this.metaColor.toRgb()}},{key:"toRgbString",value:function(){return this.metaColor.toRgbString()}},{key:"isGradient",value:function(){return!!this.colors&&!this.cleared}},{key:"getColors",value:function(){return this.colors||[{color:this,percent:0}]}},{key:"toCssString",value:function(){const{colors:n}=this;return n?`linear-gradient(90deg, ${n.map(o=>`${o.color.toRgbString()} ${o.percent}%`).join(", ")})`:this.metaColor.toRgbString()}},{key:"equals",value:function(n){return!n||this.isGradient()!==n.isGradient()?!1:this.isGradient()?this.colors.length===n.colors.length&&this.colors.every((r,o)=>{const i=n.colors[o];return r.percent===i.percent&&r.color.equals(i.color)}):this.toHexString()===n.toHexString()}}])}();var cz={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M765.7 486.8L314.9 134.7A7.97 7.97 0 00302 141v77.3c0 4.9 2.3 9.6 6.1 12.6l360 281.1-360 281.1c-3.9 3-6.1 7.7-6.1 12.6V883c0 6.7 7.7 10.4 12.9 6.3l450.8-352.1a31.96 31.96 0 000-50.4z"}}]},name:"right",theme:"outlined"},uz=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:cz}))},Yp=d.forwardRef(uz);const 
zv=e=>({[e.componentCls]:{[`${e.antCls}-motion-collapse-legacy`]:{overflow:"hidden","&-active":{transition:`height ${e.motionDurationMid} ${e.motionEaseInOut}, - opacity ${e.motionDurationMid} ${e.motionEaseInOut} !important`}},[`${e.antCls}-motion-collapse`]:{overflow:"hidden",transition:`height ${e.motionDurationMid} ${e.motionEaseInOut}, - opacity ${e.motionDurationMid} ${e.motionEaseInOut} !important`}}}),dz=e=>({animationDuration:e,animationFillMode:"both"}),fz=e=>({animationDuration:e,animationFillMode:"both"}),Hv=function(e,t,n,r){const i=(arguments.length>4&&arguments[4]!==void 0?arguments[4]:!1)?"&":"";return{[` - ${i}${e}-enter, - ${i}${e}-appear - `]:Object.assign(Object.assign({},dz(r)),{animationPlayState:"paused"}),[`${i}${e}-leave`]:Object.assign(Object.assign({},fz(r)),{animationPlayState:"paused"}),[` - ${i}${e}-enter${e}-enter-active, - ${i}${e}-appear${e}-appear-active - `]:{animationName:t,animationPlayState:"running"},[`${i}${e}-leave${e}-leave-active`]:{animationName:n,animationPlayState:"running",pointerEvents:"none"}}},pz=new fn("antFadeIn",{"0%":{opacity:0},"100%":{opacity:1}}),vz=new fn("antFadeOut",{"0%":{opacity:1},"100%":{opacity:0}}),aT=function(e){let t=arguments.length>1&&arguments[1]!==void 0?arguments[1]:!1;const{antCls:n}=e,r=`${n}-fade`,o=t?"&":"";return[Hv(r,pz,vz,e.motionDurationMid,t),{[` - ${o}${r}-enter, - ${o}${r}-appear - `]:{opacity:0,animationTimingFunction:"linear"},[`${o}${r}-leave`]:{animationTimingFunction:"linear"}}]},hz=new fn("antMoveDownIn",{"0%":{transform:"translate3d(0, 100%, 0)",transformOrigin:"0 0",opacity:0},"100%":{transform:"translate3d(0, 0, 0)",transformOrigin:"0 0",opacity:1}}),gz=new fn("antMoveDownOut",{"0%":{transform:"translate3d(0, 0, 0)",transformOrigin:"0 0",opacity:1},"100%":{transform:"translate3d(0, 100%, 0)",transformOrigin:"0 0",opacity:0}}),mz=new fn("antMoveLeftIn",{"0%":{transform:"translate3d(-100%, 0, 0)",transformOrigin:"0 0",opacity:0},"100%":{transform:"translate3d(0, 0, 
0)",transformOrigin:"0 0",opacity:1}}),bz=new fn("antMoveLeftOut",{"0%":{transform:"translate3d(0, 0, 0)",transformOrigin:"0 0",opacity:1},"100%":{transform:"translate3d(-100%, 0, 0)",transformOrigin:"0 0",opacity:0}}),yz=new fn("antMoveRightIn",{"0%":{transform:"translate3d(100%, 0, 0)",transformOrigin:"0 0",opacity:0},"100%":{transform:"translate3d(0, 0, 0)",transformOrigin:"0 0",opacity:1}}),wz=new fn("antMoveRightOut",{"0%":{transform:"translate3d(0, 0, 0)",transformOrigin:"0 0",opacity:1},"100%":{transform:"translate3d(100%, 0, 0)",transformOrigin:"0 0",opacity:0}}),xz=new fn("antMoveUpIn",{"0%":{transform:"translate3d(0, -100%, 0)",transformOrigin:"0 0",opacity:0},"100%":{transform:"translate3d(0, 0, 0)",transformOrigin:"0 0",opacity:1}}),Sz=new fn("antMoveUpOut",{"0%":{transform:"translate3d(0, 0, 0)",transformOrigin:"0 0",opacity:1},"100%":{transform:"translate3d(0, -100%, 0)",transformOrigin:"0 0",opacity:0}}),Cz={"move-up":{inKeyframes:xz,outKeyframes:Sz},"move-down":{inKeyframes:hz,outKeyframes:gz},"move-left":{inKeyframes:mz,outKeyframes:bz},"move-right":{inKeyframes:yz,outKeyframes:wz}},Qp=(e,t)=>{const{antCls:n}=e,r=`${n}-${t}`,{inKeyframes:o,outKeyframes:i}=Cz[t];return[Hv(r,o,i,e.motionDurationMid),{[` - ${r}-enter, - ${r}-appear - `]:{opacity:0,animationTimingFunction:e.motionEaseOutCirc},[`${r}-leave`]:{animationTimingFunction:e.motionEaseInOutCirc}}]},tw=new fn("antSlideUpIn",{"0%":{transform:"scaleY(0.8)",transformOrigin:"0% 0%",opacity:0},"100%":{transform:"scaleY(1)",transformOrigin:"0% 0%",opacity:1}}),nw=new fn("antSlideUpOut",{"0%":{transform:"scaleY(1)",transformOrigin:"0% 0%",opacity:1},"100%":{transform:"scaleY(0.8)",transformOrigin:"0% 0%",opacity:0}}),rw=new fn("antSlideDownIn",{"0%":{transform:"scaleY(0.8)",transformOrigin:"100% 100%",opacity:0},"100%":{transform:"scaleY(1)",transformOrigin:"100% 100%",opacity:1}}),ow=new fn("antSlideDownOut",{"0%":{transform:"scaleY(1)",transformOrigin:"100% 
100%",opacity:1},"100%":{transform:"scaleY(0.8)",transformOrigin:"100% 100%",opacity:0}}),Ez=new fn("antSlideLeftIn",{"0%":{transform:"scaleX(0.8)",transformOrigin:"0% 0%",opacity:0},"100%":{transform:"scaleX(1)",transformOrigin:"0% 0%",opacity:1}}),kz=new fn("antSlideLeftOut",{"0%":{transform:"scaleX(1)",transformOrigin:"0% 0%",opacity:1},"100%":{transform:"scaleX(0.8)",transformOrigin:"0% 0%",opacity:0}}),Oz=new fn("antSlideRightIn",{"0%":{transform:"scaleX(0.8)",transformOrigin:"100% 0%",opacity:0},"100%":{transform:"scaleX(1)",transformOrigin:"100% 0%",opacity:1}}),$z=new fn("antSlideRightOut",{"0%":{transform:"scaleX(1)",transformOrigin:"100% 0%",opacity:1},"100%":{transform:"scaleX(0.8)",transformOrigin:"100% 0%",opacity:0}}),Iz={"slide-up":{inKeyframes:tw,outKeyframes:nw},"slide-down":{inKeyframes:rw,outKeyframes:ow},"slide-left":{inKeyframes:Ez,outKeyframes:kz},"slide-right":{inKeyframes:Oz,outKeyframes:$z}},Gl=(e,t)=>{const{antCls:n}=e,r=`${n}-${t}`,{inKeyframes:o,outKeyframes:i}=Iz[t];return[Hv(r,o,i,e.motionDurationMid),{[` - ${r}-enter, - ${r}-appear - `]:{transform:"scale(0)",transformOrigin:"0% 0%",opacity:0,animationTimingFunction:e.motionEaseOutQuint,"&-prepare":{transform:"scale(1)"}},[`${r}-leave`]:{animationTimingFunction:e.motionEaseInQuint}}]},iw=new fn("antZoomIn",{"0%":{transform:"scale(0.2)",opacity:0},"100%":{transform:"scale(1)",opacity:1}}),Tz=new fn("antZoomOut",{"0%":{transform:"scale(1)"},"100%":{transform:"scale(0.2)",opacity:0}}),fE=new fn("antZoomBigIn",{"0%":{transform:"scale(0.8)",opacity:0},"100%":{transform:"scale(1)",opacity:1}}),pE=new fn("antZoomBigOut",{"0%":{transform:"scale(1)"},"100%":{transform:"scale(0.8)",opacity:0}}),Pz=new fn("antZoomUpIn",{"0%":{transform:"scale(0.8)",transformOrigin:"50% 0%",opacity:0},"100%":{transform:"scale(1)",transformOrigin:"50% 0%"}}),Mz=new fn("antZoomUpOut",{"0%":{transform:"scale(1)",transformOrigin:"50% 0%"},"100%":{transform:"scale(0.8)",transformOrigin:"50% 0%",opacity:0}}),Nz=new 
fn("antZoomLeftIn",{"0%":{transform:"scale(0.8)",transformOrigin:"0% 50%",opacity:0},"100%":{transform:"scale(1)",transformOrigin:"0% 50%"}}),Rz=new fn("antZoomLeftOut",{"0%":{transform:"scale(1)",transformOrigin:"0% 50%"},"100%":{transform:"scale(0.8)",transformOrigin:"0% 50%",opacity:0}}),Dz=new fn("antZoomRightIn",{"0%":{transform:"scale(0.8)",transformOrigin:"100% 50%",opacity:0},"100%":{transform:"scale(1)",transformOrigin:"100% 50%"}}),jz=new fn("antZoomRightOut",{"0%":{transform:"scale(1)",transformOrigin:"100% 50%"},"100%":{transform:"scale(0.8)",transformOrigin:"100% 50%",opacity:0}}),Lz=new fn("antZoomDownIn",{"0%":{transform:"scale(0.8)",transformOrigin:"50% 100%",opacity:0},"100%":{transform:"scale(1)",transformOrigin:"50% 100%"}}),Bz=new fn("antZoomDownOut",{"0%":{transform:"scale(1)",transformOrigin:"50% 100%"},"100%":{transform:"scale(0.8)",transformOrigin:"50% 100%",opacity:0}}),Az={zoom:{inKeyframes:iw,outKeyframes:Tz},"zoom-big":{inKeyframes:fE,outKeyframes:pE},"zoom-big-fast":{inKeyframes:fE,outKeyframes:pE},"zoom-left":{inKeyframes:Nz,outKeyframes:Rz},"zoom-right":{inKeyframes:Dz,outKeyframes:jz},"zoom-up":{inKeyframes:Pz,outKeyframes:Mz},"zoom-down":{inKeyframes:Lz,outKeyframes:Bz}},Sd=(e,t)=>{const{antCls:n}=e,r=`${n}-${t}`,{inKeyframes:o,outKeyframes:i}=Az[t];return[Hv(r,o,i,t==="zoom-big-fast"?e.motionDurationFast:e.motionDurationMid),{[` - ${r}-enter, - ${r}-appear - `]:{transform:"scale(0)",opacity:0,animationTimingFunction:e.motionEaseOutCirc,"&-prepare":{transform:"none"}},[`${r}-leave`]:{animationTimingFunction:e.motionEaseInOutCirc}}]},zz=(e,t)=>{const{r:n,g:r,b:o,a:i}=e.toRgb(),a=new ed(e.toRgbString()).onBackground(t).toHsv();return i<=.5?a.v>.5:n*.299+r*.587+o*.114>192},sT=e=>{const{paddingInline:t,onlyIconSize:n,paddingBlock:r}=e;return vn(e,{buttonPaddingHorizontal:t,buttonPaddingVertical:r,buttonIconOnlyFontSize:n})},lT=e=>{var t,n,r,o,i,a;const s=(t=e.contentFontSize)!==null&&t!==void 
0?t:e.fontSize,c=(n=e.contentFontSizeSM)!==null&&n!==void 0?n:e.fontSize,u=(r=e.contentFontSizeLG)!==null&&r!==void 0?r:e.fontSizeLG,p=(o=e.contentLineHeight)!==null&&o!==void 0?o:Mp(s),v=(i=e.contentLineHeightSM)!==null&&i!==void 0?i:Mp(c),h=(a=e.contentLineHeightLG)!==null&&a!==void 0?a:Mp(u),m=zz(new lz(e.colorBgSolid),"#fff")?"#000":"#fff";return{fontWeight:400,defaultShadow:`0 ${e.controlOutlineWidth}px 0 ${e.controlTmpOutline}`,primaryShadow:`0 ${e.controlOutlineWidth}px 0 ${e.controlOutline}`,dangerShadow:`0 ${e.controlOutlineWidth}px 0 ${e.colorErrorOutline}`,primaryColor:e.colorTextLightSolid,dangerColor:e.colorTextLightSolid,borderColorDisabled:e.colorBorder,defaultGhostColor:e.colorBgContainer,ghostBg:"transparent",defaultGhostBorderColor:e.colorBgContainer,paddingInline:e.paddingContentHorizontal-e.lineWidth,paddingInlineLG:e.paddingContentHorizontal-e.lineWidth,paddingInlineSM:8-e.lineWidth,onlyIconSize:e.fontSizeLG,onlyIconSizeSM:e.fontSizeLG-2,onlyIconSizeLG:e.fontSizeLG+2,groupBorderColor:e.colorPrimaryHover,linkHoverBg:"transparent",textTextColor:e.colorText,textTextHoverColor:e.colorText,textTextActiveColor:e.colorText,textHoverBg:e.colorFillTertiary,defaultColor:e.colorText,defaultBg:e.colorBgContainer,defaultBorderColor:e.colorBorder,defaultBorderColorDisabled:e.colorBorder,defaultHoverBg:e.colorBgContainer,defaultHoverColor:e.colorPrimaryHover,defaultHoverBorderColor:e.colorPrimaryHover,defaultActiveBg:e.colorBgContainer,defaultActiveColor:e.colorPrimaryActive,defaultActiveBorderColor:e.colorPrimaryActive,solidTextColor:m,contentFontSize:s,contentFontSizeSM:c,contentFontSizeLG:u,contentLineHeight:p,contentLineHeightSM:v,contentLineHeightLG:h,paddingBlock:Math.max((e.controlHeight-s*p)/2-e.lineWidth,0),paddingBlockSM:Math.max((e.controlHeightSM-c*v)/2-e.lineWidth,0),paddingBlockLG:Math.max((e.controlHeightLG-u*h)/2-e.lineWidth,0)}},Hz=e=>{const{componentCls:t,iconCls:n,fontWeight:r}=e;return{[t]:{outline:"none",position:"relative",display:"inline
-flex",gap:e.marginXS,alignItems:"center",justifyContent:"center",fontWeight:r,whiteSpace:"nowrap",textAlign:"center",backgroundImage:"none",background:"transparent",border:`${de(e.lineWidth)} ${e.lineType} transparent`,cursor:"pointer",transition:`all ${e.motionDurationMid} ${e.motionEaseInOut}`,userSelect:"none",touchAction:"manipulation",color:e.colorText,"&:disabled > *":{pointerEvents:"none"},"> span":{display:"inline-flex"},[`${t}-icon`]:{lineHeight:1},"> a":{color:"currentColor"},"&:not(:disabled)":Object.assign({},Xl(e)),[`&${t}-two-chinese-chars::first-letter`]:{letterSpacing:"0.34em"},[`&${t}-two-chinese-chars > *:not(${n})`]:{marginInlineEnd:"-0.34em",letterSpacing:"0.34em"},"&-icon-end":{flexDirection:"row-reverse"}}}},cT=(e,t,n)=>({[`&:not(:disabled):not(${e}-disabled)`]:{"&:hover":t,"&:active":n}}),Fz=e=>({minWidth:e.controlHeight,paddingInlineStart:0,paddingInlineEnd:0,borderRadius:"50%"}),_z=e=>({borderRadius:e.controlHeight,paddingInlineStart:e.calc(e.controlHeight).div(2).equal(),paddingInlineEnd:e.calc(e.controlHeight).div(2).equal()}),Vz=e=>({cursor:"not-allowed",borderColor:e.borderColorDisabled,color:e.colorTextDisabled,background:e.colorBgContainerDisabled,boxShadow:"none"}),aw=(e,t,n,r,o,i,a,s)=>({[`&${e}-background-ghost`]:Object.assign(Object.assign({color:n||void 0,background:t,borderColor:r||void 0,boxShadow:"none"},cT(e,Object.assign({background:t},a),Object.assign({background:t},s))),{"&:disabled":{cursor:"not-allowed",color:o||void 0,borderColor:i||void 0}})}),Wz=e=>({[`&:disabled, &${e.componentCls}-disabled`]:Object.assign({},Vz(e))}),Uz=e=>({[`&:disabled, &${e.componentCls}-disabled`]:{cursor:"not-allowed",color:e.colorTextDisabled}}),Fv=(e,t,n,r)=>{const i=r&&["link","text"].includes(r)?Uz:Wz;return Object.assign(Object.assign({},i(e)),cT(e.componentCls,t,n))},sw=(e,t,n,r,o)=>({[`&${e.componentCls}-variant-solid`]:Object.assign({color:t,background:n},Fv(e,r,o))}),lw=(e,t,n,r,o)=>({[`&${e.componentCls}-variant-outlined, 
&${e.componentCls}-variant-dashed`]:Object.assign({borderColor:t,background:n},Fv(e,r,o))}),cw=e=>({[`&${e.componentCls}-variant-dashed`]:{borderStyle:"dashed"}}),uw=(e,t,n,r)=>({[`&${e.componentCls}-variant-filled`]:Object.assign({boxShadow:"none",background:t},Fv(e,n,r))}),Yl=(e,t,n,r,o)=>({[`&${e.componentCls}-variant-${n}`]:Object.assign({color:t,boxShadow:"none"},Fv(e,r,o,n))}),Kz=e=>Object.assign(Object.assign(Object.assign(Object.assign(Object.assign({color:e.defaultColor,boxShadow:e.defaultShadow},sw(e,e.solidTextColor,e.colorBgSolid,{background:e.colorBgSolidHover},{background:e.colorBgSolidActive})),cw(e)),uw(e,e.colorFillTertiary,{background:e.colorFillSecondary},{background:e.colorFill})),Yl(e,e.textTextColor,"link",{color:e.colorLinkHover,background:e.linkHoverBg},{color:e.colorLinkActive})),aw(e.componentCls,e.ghostBg,e.defaultGhostColor,e.defaultGhostBorderColor,e.colorTextDisabled,e.colorBorder)),qz=e=>Object.assign(Object.assign(Object.assign(Object.assign(Object.assign({color:e.colorPrimary,boxShadow:e.primaryShadow},lw(e,e.colorPrimary,e.colorBgContainer,{color:e.colorPrimaryTextHover,borderColor:e.colorPrimaryHover,background:e.colorBgContainer},{color:e.colorPrimaryTextActive,borderColor:e.colorPrimaryActive,background:e.colorBgContainer})),cw(e)),uw(e,e.colorPrimaryBg,{background:e.colorPrimaryBgHover},{background:e.colorPrimaryBorder})),Yl(e,e.colorLink,"text",{color:e.colorPrimaryTextHover,background:e.colorPrimaryBg},{color:e.colorPrimaryTextActive,background:e.colorPrimaryBorder})),aw(e.componentCls,e.ghostBg,e.colorPrimary,e.colorPrimary,e.colorTextDisabled,e.colorBorder,{color:e.colorPrimaryHover,borderColor:e.colorPrimaryHover},{color:e.colorPrimaryActive,borderColor:e.colorPrimaryActive})),Xz=e=>Object.assign(Object.assign(Object.assign(Object.assign(Object.assign(Object.assign(Object.assign({color:e.colorError,boxShadow:e.dangerShadow},sw(e,e.dangerColor,e.colorError,{background:e.colorErrorHover},{background:e.colorErrorActive})),lw(e
,e.colorError,e.colorBgContainer,{color:e.colorErrorHover,borderColor:e.colorErrorBorderHover},{color:e.colorErrorActive,borderColor:e.colorErrorActive})),cw(e)),uw(e,e.colorErrorBg,{background:e.colorErrorBgFilledHover},{background:e.colorErrorBgActive})),Yl(e,e.colorError,"text",{color:e.colorErrorHover,background:e.colorErrorBg},{color:e.colorErrorHover,background:e.colorErrorBgActive})),Yl(e,e.colorError,"link",{color:e.colorErrorHover},{color:e.colorErrorActive})),aw(e.componentCls,e.ghostBg,e.colorError,e.colorError,e.colorTextDisabled,e.colorBorder,{color:e.colorErrorHover,borderColor:e.colorErrorHover},{color:e.colorErrorActive,borderColor:e.colorErrorActive})),Gz=e=>{const{componentCls:t}=e;return{[`${t}-color-default`]:Kz(e),[`${t}-color-primary`]:qz(e),[`${t}-color-dangerous`]:Xz(e)}},Yz=e=>Object.assign(Object.assign(Object.assign(Object.assign({},lw(e,e.defaultBorderColor,e.defaultBg,{color:e.defaultHoverColor,borderColor:e.defaultHoverBorderColor,background:e.defaultHoverBg},{color:e.defaultActiveColor,borderColor:e.defaultActiveBorderColor,background:e.defaultActiveBg})),Yl(e,e.textTextColor,"text",{color:e.textTextHoverColor,background:e.textHoverBg},{color:e.textTextActiveColor,background:e.colorBgTextActive})),sw(e,e.primaryColor,e.colorPrimary,{background:e.colorPrimaryHover,color:e.primaryColor},{background:e.colorPrimaryActive,color:e.primaryColor})),Yl(e,e.colorLink,"link",{color:e.colorLinkHover,background:e.linkHoverBg},{color:e.colorLinkActive})),dw=function(e){let t=arguments.length>1&&arguments[1]!==void 0?arguments[1]:"";const{componentCls:n,controlHeight:r,fontSize:o,lineHeight:i,borderRadius:a,buttonPaddingHorizontal:s,iconCls:c,buttonPaddingVertical:u}=e,p=`${n}-icon-only`;return[{[t]:{fontSize:o,lineHeight:i,height:r,padding:`${de(u)} 
${de(s)}`,borderRadius:a,[`&${p}`]:{width:r,paddingInline:0,[`&${n}-compact-item`]:{flex:"none"},[`&${n}-round`]:{width:"auto"},[c]:{fontSize:e.buttonIconOnlyFontSize}},[`&${n}-loading`]:{opacity:e.opacityLoading,cursor:"default"},[`${n}-loading-icon`]:{transition:`width ${e.motionDurationSlow} ${e.motionEaseInOut}, opacity ${e.motionDurationSlow} ${e.motionEaseInOut}`}}},{[`${n}${n}-circle${t}`]:Fz(e)},{[`${n}${n}-round${t}`]:_z(e)}]},Qz=e=>{const t=vn(e,{fontSize:e.contentFontSize,lineHeight:e.contentLineHeight});return dw(t,e.componentCls)},Zz=e=>{const t=vn(e,{controlHeight:e.controlHeightSM,fontSize:e.contentFontSizeSM,lineHeight:e.contentLineHeightSM,padding:e.paddingXS,buttonPaddingHorizontal:e.paddingInlineSM,buttonPaddingVertical:e.paddingBlockSM,borderRadius:e.borderRadiusSM,buttonIconOnlyFontSize:e.onlyIconSizeSM});return dw(t,`${e.componentCls}-sm`)},Jz=e=>{const t=vn(e,{controlHeight:e.controlHeightLG,fontSize:e.contentFontSizeLG,lineHeight:e.contentLineHeightLG,buttonPaddingHorizontal:e.paddingInlineLG,buttonPaddingVertical:e.paddingBlockLG,borderRadius:e.borderRadiusLG,buttonIconOnlyFontSize:e.onlyIconSizeLG});return dw(t,`${e.componentCls}-lg`)},e8=e=>{const{componentCls:t}=e;return{[t]:{[`&${t}-block`]:{width:"100%"}}}},t8=In("Button",e=>{const t=sT(e);return[Hz(t),Qz(t),Zz(t),Jz(t),e8(t),Gz(t),Yz(t),tz(t)]},lT,{unitless:{fontWeight:!0,contentLineHeight:!0,contentLineHeightSM:!0,contentLineHeightLG:!0}});function n8(e,t,n){const{focusElCls:r,focus:o,borderElCls:i}=n,a=i?"> *":"",s=["hover",o?"focus":null,"active"].filter(Boolean).map(c=>`&:${c} ${a}`).join(",");return{[`&-item:not(${t}-last-item)`]:{marginInlineEnd:e.calc(e.lineWidth).mul(-1).equal()},"&-item":Object.assign(Object.assign({[s]:{zIndex:2}},r?{[`&${r}`]:{zIndex:2}}:{}),{[`&[disabled] ${a}`]:{zIndex:0}})}}function r8(e,t,n){const{borderElCls:r}=n,o=r?`> ${r}`:"";return{[`&-item:not(${t}-first-item):not(${t}-last-item) 
${o}`]:{borderRadius:0},[`&-item:not(${t}-last-item)${t}-first-item`]:{[`& ${o}, &${e}-sm ${o}, &${e}-lg ${o}`]:{borderStartEndRadius:0,borderEndEndRadius:0}},[`&-item:not(${t}-first-item)${t}-last-item`]:{[`& ${o}, &${e}-sm ${o}, &${e}-lg ${o}`]:{borderStartStartRadius:0,borderEndStartRadius:0}}}}function _v(e){let t=arguments.length>1&&arguments[1]!==void 0?arguments[1]:{focus:!0};const{componentCls:n}=e,r=`${n}-compact`;return{[r]:Object.assign(Object.assign({},n8(e,r,t)),r8(n,r,t))}}function o8(e,t){return{[`&-item:not(${t}-last-item)`]:{marginBottom:e.calc(e.lineWidth).mul(-1).equal()},"&-item":{"&:hover,&:focus,&:active":{zIndex:2},"&[disabled]":{zIndex:0}}}}function i8(e,t){return{[`&-item:not(${t}-first-item):not(${t}-last-item)`]:{borderRadius:0},[`&-item${t}-first-item:not(${t}-last-item)`]:{[`&, &${e}-sm, &${e}-lg`]:{borderEndEndRadius:0,borderEndStartRadius:0}},[`&-item${t}-last-item:not(${t}-first-item)`]:{[`&, &${e}-sm, &${e}-lg`]:{borderStartStartRadius:0,borderStartEndRadius:0}}}}function a8(e){const t=`${e.componentCls}-compact-vertical`;return{[t]:Object.assign(Object.assign({},o8(e,t)),i8(e.componentCls,t))}}const s8=e=>{const{componentCls:t,calc:n}=e;return{[t]:{[`&-compact-item${t}-primary`]:{[`&:not([disabled]) + ${t}-compact-item${t}-primary:not([disabled])`]:{position:"relative","&:before":{position:"absolute",top:n(e.lineWidth).mul(-1).equal(),insetInlineStart:n(e.lineWidth).mul(-1).equal(),display:"inline-block",width:e.lineWidth,height:`calc(100% + ${de(e.lineWidth)} * 2)`,backgroundColor:e.colorPrimaryHover,content:'""'}}},"&-compact-vertical-item":{[`&${t}-primary`]:{[`&:not([disabled]) + ${t}-compact-vertical-item${t}-primary:not([disabled])`]:{position:"relative","&:before":{position:"absolute",top:n(e.lineWidth).mul(-1).equal(),insetInlineStart:n(e.lineWidth).mul(-1).equal(),display:"inline-block",width:`calc(100% + ${de(e.lineWidth)} * 
2)`,height:e.lineWidth,backgroundColor:e.colorPrimaryHover,content:'""'}}}}}}},l8=ic(["Button","compact"],e=>{const t=sT(e);return[_v(t),a8(t),s8(t)]},lT);var c8=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{var n,r,o,i;const{loading:a=!1,prefixCls:s,color:c,variant:u,type:p,danger:v=!1,shape:h="default",size:m,styles:b,disabled:y,className:w,rootClassName:C,children:S,icon:E,iconPosition:k="start",ghost:O=!1,block:$=!1,htmlType:T="button",classNames:M,style:P={},autoInsertSpace:R}=e,A=c8(e,["loading","prefixCls","color","variant","type","danger","shape","size","styles","disabled","className","rootClassName","children","icon","iconPosition","ghost","block","htmlType","classNames","style","autoInsertSpace"]),V=p||"default",[z,B]=d.useMemo(()=>{if(c&&u)return[c,u];const Ye=d8[V]||[];return v?["danger",Ye[1]]:Ye},[p,c,u,v]),H=z==="danger"?"dangerous":z,{getPrefixCls:j,direction:L,button:F}=d.useContext(ht),U=(n=R??(F==null?void 0:F.autoInsertSpace))!==null&&n!==void 0?n:!0,D=j("btn",s),[W,G,q]=t8(D),J=d.useContext(So),Y=y??J,Q=d.useContext(oT),te=d.useMemo(()=>u8(a),[a]),[ce,se]=d.useState(te.loading),[ne,ae]=d.useState(!1),ee=d.createRef(),re=Wr(t,ee),le=d.Children.count(S)===1&&!E&&!gm(B);d.useEffect(()=>{let Ye=null;te.delay>0?Ye=setTimeout(()=>{Ye=null,se(!0)},te.delay):se(te.loading);function Ge(){Ye&&(clearTimeout(Ye),Ye=null)}return Ge},[te]),d.useEffect(()=>{if(!re||!re.current||!U)return;const Ye=re.current.textContent;le&&ub(Ye)?ne||ae(!0):ne&&ae(!1)},[re]);const pe=ue.useCallback(Ye=>{var Ge;if(ce||Y){Ye.preventDefault();return}(Ge=e.onClick)===null||Ge===void 0||Ge.call(e,Ye)},[e.onClick,ce,Y]),{compactSize:Oe,compactItemClassnames:ge}=lc(D,L),Re={large:"lg",small:"sm",middle:void 0},ye=Go(Ye=>{var Ge,Fe;return(Fe=(Ge=m??Oe)!==null&&Ge!==void 0?Ge:Q)!==null&&Fe!==void 
0?Fe:Ye}),Te=ye&&(r=Re[ye])!==null&&r!==void 0?r:"",Ae=ce?"loading":E,me=Ln(A,["navigate"]),Ie=ie(D,G,q,{[`${D}-${h}`]:h!=="default"&&h,[`${D}-${V}`]:V,[`${D}-dangerous`]:v,[`${D}-color-${H}`]:H,[`${D}-variant-${B}`]:B,[`${D}-${Te}`]:Te,[`${D}-icon-only`]:!S&&S!==0&&!!Ae,[`${D}-background-ghost`]:O&&!gm(B),[`${D}-loading`]:ce,[`${D}-two-chinese-chars`]:ne&&U&&!ce,[`${D}-block`]:$,[`${D}-rtl`]:L==="rtl",[`${D}-icon-end`]:k==="end"},ge,w,C,F==null?void 0:F.className),Le=Object.assign(Object.assign({},F==null?void 0:F.style),P),Be=ie(M==null?void 0:M.icon,(o=F==null?void 0:F.classNames)===null||o===void 0?void 0:o.icon),et=Object.assign(Object.assign({},(b==null?void 0:b.icon)||{}),((i=F==null?void 0:F.styles)===null||i===void 0?void 0:i.icon)||{}),rt=E&&!ce?ue.createElement(iT,{prefixCls:D,className:Be,style:et},E):ue.createElement(ez,{existIcon:!!E,prefixCls:D,loading:ce}),Ze=S||S===0?J5(S,le&&U):null;if(me.href!==void 0)return W(ue.createElement("a",Object.assign({},me,{className:ie(Ie,{[`${D}-disabled`]:Y}),href:Y?void 0:me.href,style:Le,onClick:pe,ref:re,tabIndex:Y?-1:0}),rt,Ze));let Ve=ue.createElement("button",Object.assign({},A,{type:T,className:Ie,style:Le,onClick:pe,disabled:Y,ref:re}),rt,Ze,!!ge&&ue.createElement(l8,{key:"compact",prefixCls:D}));return gm(B)||(Ve=ue.createElement(Lv,{component:"Button",disabled:ce},Ve)),W(Ve)}),jr=f8;jr.Group=Q5;jr.__ANT_BUTTON=!0;function xm(e){return!!(e!=null&&e.then)}const fw=e=>{const{type:t,children:n,prefixCls:r,buttonProps:o,close:i,autoFocus:a,emitEvent:s,isSilent:c,quitOnNullishReturnValue:u,actionFn:p}=e,v=d.useRef(!1),h=d.useRef(null),[m,b]=Ts(!1),y=function(){i==null||i.apply(void 0,arguments)};d.useEffect(()=>{let S=null;return a&&(S=setTimeout(()=>{var E;(E=h.current)===null||E===void 0||E.focus()})),()=>{S&&clearTimeout(S)}},[]);const w=S=>{xm(S)&&(b(!0),S.then(function(){b(!1,!0),y.apply(void 0,arguments),v.current=!1},E=>{if(b(!1,!0),v.current=!1,!(c!=null&&c()))return 
Promise.reject(E)}))},C=S=>{if(v.current)return;if(v.current=!0,!p){y();return}let E;if(s){if(E=p(S),u&&!xm(E)){v.current=!1,y(S);return}}else if(p.length)E=p(i),v.current=!1;else if(E=p(),!xm(E)){y();return}w(E)};return d.createElement(jr,Object.assign({},ew(t),{onClick:C,loading:m,prefixCls:r},o,{ref:h}),n)},Cd=ue.createContext({}),{Provider:uT}=Cd,vE=()=>{const{autoFocusButton:e,cancelButtonProps:t,cancelTextLocale:n,isSilent:r,mergedOkCancel:o,rootPrefixCls:i,close:a,onCancel:s,onConfirm:c}=d.useContext(Cd);return o?ue.createElement(fw,{isSilent:r,actionFn:s,close:function(){a==null||a.apply(void 0,arguments),c==null||c(!1)},autoFocus:e==="cancel",buttonProps:t,prefixCls:`${i}-btn`},n):null},hE=()=>{const{autoFocusButton:e,close:t,isSilent:n,okButtonProps:r,rootPrefixCls:o,okTextLocale:i,okType:a,onConfirm:s,onOk:c}=d.useContext(Cd);return ue.createElement(fw,{isSilent:n,type:a||"primary",actionFn:c,close:function(){t==null||t.apply(void 0,arguments),s==null||s(!0)},autoFocus:e==="ok",buttonProps:r,prefixCls:`${o}-btn`},i)};var dT=d.createContext(null),gE=[];function p8(e,t){var n=d.useState(function(){if(!$r())return null;var b=document.createElement("div");return b}),r=ve(n,1),o=r[0],i=d.useRef(!1),a=d.useContext(dT),s=d.useState(gE),c=ve(s,2),u=c[0],p=c[1],v=a||(i.current?void 0:function(b){p(function(y){var w=[b].concat(Se(y));return w})});function h(){o.parentElement||document.body.appendChild(o),i.current=!0}function m(){var b;(b=o.parentElement)===null||b===void 0||b.removeChild(o),i.current=!1}return sn(function(){return e?a?a(h):h():m(),m},[e]),sn(function(){u.length&&(u.forEach(function(b){return b()}),p(gE))},[u]),[o,v]}var Sm;function fT(e){var t="rc-scrollbar-measure-".concat(Math.random().toString(36).substring(7)),n=document.createElement("div");n.id=t;var r=n.style;r.position="absolute",r.left="0",r.top="0",r.width="100px",r.height="100px",r.overflow="scroll";var o,i;if(e){var 
a=getComputedStyle(e);r.scrollbarColor=a.scrollbarColor,r.scrollbarWidth=a.scrollbarWidth;var s=getComputedStyle(e,"::-webkit-scrollbar"),c=parseInt(s.width,10),u=parseInt(s.height,10);try{var p=c?"width: ".concat(s.width,";"):"",v=u?"height: ".concat(s.height,";"):"";ea(` -#`.concat(t,`::-webkit-scrollbar { -`).concat(p,` -`).concat(v,` -}`),t)}catch{o=c,i=u}}document.body.appendChild(n);var h=e&&o&&!isNaN(o)?o:n.offsetWidth-n.clientWidth,m=e&&i&&!isNaN(i)?i:n.offsetHeight-n.clientHeight;return document.body.removeChild(n),Ku(t),{width:h,height:m}}function mE(e){return typeof document>"u"?0:(Sm===void 0&&(Sm=fT()),Sm.width)}function db(e){return typeof document>"u"||!e||!(e instanceof Element)?{width:0,height:0}:fT(e)}function v8(){return document.body.scrollHeight>(window.innerHeight||document.documentElement.clientHeight)&&window.innerWidth>document.body.offsetWidth}var h8="rc-util-locker-".concat(Date.now()),bE=0;function g8(e){var t=!!e,n=d.useState(function(){return bE+=1,"".concat(h8,"_").concat(bE)}),r=ve(n,1),o=r[0];sn(function(){if(t){var i=db(document.body).width,a=v8();ea(` -html body { - overflow-y: hidden; - `.concat(a?"width: calc(100% - ".concat(i,"px);"):"",` -}`),o)}else Ku(o);return function(){Ku(o)}},[t,o])}var m8=!1;function b8(e){return m8}var yE=function(t){return t===!1?!1:!$r()||!t?null:typeof t=="string"?document.querySelector(t):typeof t=="function"?t():t},pw=d.forwardRef(function(e,t){var n=e.open,r=e.autoLock,o=e.getContainer;e.debug;var i=e.autoDestroy,a=i===void 0?!0:i,s=e.children,c=d.useState(n),u=ve(c,2),p=u[0],v=u[1],h=p||n;d.useEffect(function(){(a||n)&&v(n)},[n,a]);var m=d.useState(function(){return yE(o)}),b=ve(m,2),y=b[0],w=b[1];d.useEffect(function(){var A=yE(o);w(A??null)});var C=p8(h&&!y),S=ve(C,2),E=S[0],k=S[1],O=y??E;g8(r&&n&&$r()&&(O===E||O===document.body));var $=null;if(s&&vi(s)&&t){var T=s;$=T.ref}var M=Bs($,t);if(!h||!$r()||y===void 0)return null;var P=O===!1||b8(),R=s;return 
t&&(R=d.cloneElement(s,{ref:M})),d.createElement(dT.Provider,{value:k},P?R:pi.createPortal(R,O))}),pT=d.createContext({});function y8(){var e=Z({},Ev);return e.useId}var wE=0,xE=y8();const vT=xE?function(t){var n=xE();return t||n}:function(t){var n=d.useState("ssr-id"),r=ve(n,2),o=r[0],i=r[1];return d.useEffect(function(){var a=wE;wE+=1,i("rc_unique_".concat(a))},[]),t||o};function SE(e,t,n){var r=t;return!r&&n&&(r="".concat(e,"-").concat(n)),r}function CE(e,t){var n=e["page".concat(t?"Y":"X","Offset")],r="scroll".concat(t?"Top":"Left");if(typeof n!="number"){var o=e.document;n=o.documentElement[r],typeof n!="number"&&(n=o.body[r])}return n}function w8(e){var t=e.getBoundingClientRect(),n={left:t.left,top:t.top},r=e.ownerDocument,o=r.defaultView||r.parentWindow;return n.left+=CE(o),n.top+=CE(o,!0),n}const x8=d.memo(function(e){var t=e.children;return t},function(e,t){var n=t.shouldUpdate;return!n});var S8={width:0,height:0,overflow:"hidden",outline:"none"},C8={outline:"none"},hT=ue.forwardRef(function(e,t){var n=e.prefixCls,r=e.className,o=e.style,i=e.title,a=e.ariaId,s=e.footer,c=e.closable,u=e.closeIcon,p=e.onClose,v=e.children,h=e.bodyStyle,m=e.bodyProps,b=e.modalRender,y=e.onMouseDown,w=e.onMouseUp,C=e.holderRef,S=e.visible,E=e.forceRender,k=e.width,O=e.height,$=e.classNames,T=e.styles,M=ue.useContext(pT),P=M.panel,R=Bs(C,P),A=d.useRef(),V=d.useRef();ue.useImperativeHandle(t,function(){return{focus:function(){var W;(W=A.current)===null||W===void 0||W.focus({preventScroll:!0})},changeActive:function(W){var G=document,q=G.activeElement;W&&q===V.current?A.current.focus({preventScroll:!0}):!W&&q===A.current&&V.current.focus({preventScroll:!0})}}});var z={};k!==void 0&&(z.width=k),O!==void 0&&(z.height=O);var B=s?ue.createElement("div",{className:ie("".concat(n,"-footer"),$==null?void 0:$.footer),style:Z({},T==null?void 0:T.footer)},s):null,_=i?ue.createElement("div",{className:ie("".concat(n,"-header"),$==null?void 0:$.header),style:Z({},T==null?void 
0:T.header)},ue.createElement("div",{className:"".concat(n,"-title"),id:a},i)):null,H=d.useMemo(function(){return st(c)==="object"&&c!==null?c:c?{closeIcon:u??ue.createElement("span",{className:"".concat(n,"-close-x")})}:{}},[c,u,n]),j=Gr(H,!0),L=st(c)==="object"&&c.disabled,F=c?ue.createElement("button",$e({type:"button",onClick:p,"aria-label":"Close"},j,{className:"".concat(n,"-close"),disabled:L}),H.closeIcon):null,U=ue.createElement("div",{className:ie("".concat(n,"-content"),$==null?void 0:$.content),style:T==null?void 0:T.content},F,_,ue.createElement("div",$e({className:ie("".concat(n,"-body"),$==null?void 0:$.body),style:Z(Z({},h),T==null?void 0:T.body)},m),v),B);return ue.createElement("div",{key:"dialog-element",role:"dialog","aria-labelledby":i?a:null,"aria-modal":"true",ref:R,style:Z(Z({},o),z),className:ie(n,r),onMouseDown:y,onMouseUp:w},ue.createElement("div",{ref:A,tabIndex:0,style:C8},ue.createElement(x8,{shouldUpdate:S||E},b?b(U):U)),ue.createElement("div",{tabIndex:0,ref:V,style:S8}))}),gT=d.forwardRef(function(e,t){var n=e.prefixCls,r=e.title,o=e.style,i=e.className,a=e.visible,s=e.forceRender,c=e.destroyOnClose,u=e.motionName,p=e.ariaId,v=e.onVisibleChanged,h=e.mousePosition,m=d.useRef(),b=d.useState(),y=ve(b,2),w=y[0],C=y[1],S={};w&&(S.transformOrigin=w);function E(){var k=w8(m.current);C(h&&(h.x||h.y)?"".concat(h.x-k.left,"px ").concat(h.y-k.top,"px"):"")}return d.createElement(Xo,{visible:a,onVisibleChanged:v,onAppearPrepare:E,onEnterPrepare:E,forceRender:s,motionName:u,removeOnLeave:c,ref:m},function(k,O){var $=k.className,T=k.style;return d.createElement(hT,$e({},e,{ref:t,title:r,ariaId:p,prefixCls:n,holderRef:O,style:Z(Z(Z({},T),o),S),className:ie(i,$)}))})});gT.displayName="Content";var E8=function(t){var n=t.prefixCls,r=t.style,o=t.visible,i=t.maskProps,a=t.motionName,s=t.className;return d.createElement(Xo,{key:"mask",visible:o,motionName:a,leavedClassName:"".concat(n,"-mask-hidden")},function(c,u){var p=c.className,v=c.style;return 
d.createElement("div",$e({ref:u,style:Z(Z({},v),r),className:ie("".concat(n,"-mask"),p,s)},i))})},k8=function(t){var n=t.prefixCls,r=n===void 0?"rc-dialog":n,o=t.zIndex,i=t.visible,a=i===void 0?!1:i,s=t.keyboard,c=s===void 0?!0:s,u=t.focusTriggerAfterClose,p=u===void 0?!0:u,v=t.wrapStyle,h=t.wrapClassName,m=t.wrapProps,b=t.onClose,y=t.afterOpenChange,w=t.afterClose,C=t.transitionName,S=t.animation,E=t.closable,k=E===void 0?!0:E,O=t.mask,$=O===void 0?!0:O,T=t.maskTransitionName,M=t.maskAnimation,P=t.maskClosable,R=P===void 0?!0:P,A=t.maskStyle,V=t.maskProps,z=t.rootClassName,B=t.classNames,_=t.styles,H=d.useRef(),j=d.useRef(),L=d.useRef(),F=d.useState(a),U=ve(F,2),D=U[0],W=U[1],G=vT();function q(){D0(j.current,document.activeElement)||(H.current=document.activeElement)}function J(){if(!D0(j.current,document.activeElement)){var le;(le=L.current)===null||le===void 0||le.focus()}}function Y(le){if(le)J();else{if(W(!1),$&&H.current&&p){try{H.current.focus({preventScroll:!0})}catch{}H.current=null}D&&(w==null||w())}y==null||y(le)}function Q(le){b==null||b(le)}var te=d.useRef(!1),ce=d.useRef(),se=function(){clearTimeout(ce.current),te.current=!0},ne=function(){ce.current=setTimeout(function(){te.current=!1})},ae=null;R&&(ae=function(pe){te.current?te.current=!1:j.current===pe.target&&Q(pe)});function ee(le){if(c&&le.keyCode===De.ESC){le.stopPropagation(),Q(le);return}a&&le.keyCode===De.TAB&&L.current.changeActive(!le.shiftKey)}d.useEffect(function(){a&&(W(!0),q())},[a]),d.useEffect(function(){return function(){clearTimeout(ce.current)}},[]);var re=Z(Z(Z({zIndex:o},v),_==null?void 0:_.wrapper),{},{display:D?null:"none"});return d.createElement("div",$e({className:ie("".concat(r,"-root"),z)},Gr(t,{data:!0})),d.createElement(E8,{prefixCls:r,visible:$&&a,motionName:SE(r,T,M),style:Z(Z({zIndex:o},A),_==null?void 0:_.mask),maskProps:V,className:B==null?void 0:B.mask}),d.createElement("div",$e({tabIndex:-1,onKeyDown:ee,className:ie("".concat(r,"-wrap"),h,B==null?void 
0:B.wrapper),ref:j,onClick:ae,style:re},m),d.createElement(gT,$e({},t,{onMouseDown:se,onMouseUp:ne,ref:L,closable:k,ariaId:G,prefixCls:r,visible:a&&D,onClose:Q,onVisibleChanged:Y,motionName:SE(r,C,S)}))))},mT=function(t){var n=t.visible,r=t.getContainer,o=t.forceRender,i=t.destroyOnClose,a=i===void 0?!1:i,s=t.afterClose,c=t.panelRef,u=d.useState(n),p=ve(u,2),v=p[0],h=p[1],m=d.useMemo(function(){return{panel:c}},[c]);return d.useEffect(function(){n&&h(!0)},[n]),!o&&a&&!v?null:d.createElement(pT.Provider,{value:m},d.createElement(pw,{open:n||o||v,autoDestroy:!1,getContainer:r,autoLock:n||v},d.createElement(k8,$e({},t,{destroyOnClose:a,afterClose:function(){s==null||s(),h(!1)}}))))};mT.displayName="Dialog";var ys="RC_FORM_INTERNAL_HOOKS",kn=function(){Fn(!1,"Can not find FormContext. Please make sure you wrap Field under Form.")},Ms=d.createContext({getFieldValue:kn,getFieldsValue:kn,getFieldError:kn,getFieldWarning:kn,getFieldsError:kn,isFieldsTouched:kn,isFieldTouched:kn,isFieldValidating:kn,isFieldsValidating:kn,resetFields:kn,setFields:kn,setFieldValue:kn,setFieldsValue:kn,validateFields:kn,submit:kn,getInternalHooks:function(){return kn(),{dispatch:kn,initEntityValue:kn,registerField:kn,useSubscribe:kn,setInitialValues:kn,destroyForm:kn,setCallbacks:kn,registerWatch:kn,getFields:kn,setValidateMessages:kn,setPreserve:kn,getInitialValue:kn}}}),td=d.createContext(null);function fb(e){return e==null?[]:Array.isArray(e)?e:[e]}function O8(e){return e&&!!e._init}function pb(){return{default:"Validation error on field %s",required:"%s is required",enum:"%s must be one of %s",whitespace:"%s cannot be empty",date:{format:"%s date %s is invalid for format %s",parse:"%s date could not be parsed, %s is invalid ",invalid:"%s date %s is invalid"},types:{string:"%s is not a %s",method:"%s is not a %s (function)",array:"%s is not an %s",object:"%s is not an %s",number:"%s is not a %s",date:"%s is not a %s",boolean:"%s is not a %s",integer:"%s is not an %s",float:"%s is not a 
%s",regexp:"%s is not a valid %s",email:"%s is not a valid %s",url:"%s is not a valid %s",hex:"%s is not a valid %s"},string:{len:"%s must be exactly %s characters",min:"%s must be at least %s characters",max:"%s cannot be longer than %s characters",range:"%s must be between %s and %s characters"},number:{len:"%s must equal %s",min:"%s cannot be less than %s",max:"%s cannot be greater than %s",range:"%s must be between %s and %s"},array:{len:"%s must be exactly %s in length",min:"%s cannot be less than %s in length",max:"%s cannot be greater than %s in length",range:"%s must be between %s and %s in length"},pattern:{mismatch:"%s value %s does not match pattern %s"},clone:function(){var t=JSON.parse(JSON.stringify(this));return t.clone=this.clone,t}}}var vb=pb();function $8(e){try{return Function.toString.call(e).indexOf("[native code]")!==-1}catch{return typeof e=="function"}}function I8(e,t,n){if(Ly())return Reflect.construct.apply(null,arguments);var r=[null];r.push.apply(r,t);var o=new(e.bind.apply(e,r));return n&&Vu(o,n.prototype),o}function hb(e){var t=typeof Map=="function"?new Map:void 0;return hb=function(r){if(r===null||!$8(r))return r;if(typeof r!="function")throw new TypeError("Super expression must either be null or a function");if(t!==void 0){if(t.has(r))return t.get(r);t.set(r,o)}function o(){return I8(r,arguments,Wu(this).constructor)}return o.prototype=Object.create(r.prototype,{constructor:{value:o,enumerable:!1,writable:!0,configurable:!0}}),Vu(o,r)},hb(e)}var T8=/%[sdj%]/g,P8=function(){};typeof process<"u"&&process.env;function gb(e){if(!e||!e.length)return null;var t={};return e.forEach(function(n){var r=n.field;t[r]=t[r]||[],t[r].push(n)}),t}function wo(e){for(var t=arguments.length,n=new Array(t>1?t-1:0),r=1;r=i)return s;switch(s){case"%s":return String(n[o++]);case"%d":return Number(n[o++]);case"%j":try{return JSON.stringify(n[o++])}catch{return"[Circular]"}break;default:return s}});return a}return e}function M8(e){return 
e==="string"||e==="url"||e==="hex"||e==="email"||e==="date"||e==="pattern"}function yr(e,t){return!!(e==null||t==="array"&&Array.isArray(e)&&!e.length||M8(t)&&typeof e=="string"&&!e)}function N8(e,t,n){var r=[],o=0,i=e.length;function a(s){r.push.apply(r,Se(s||[])),o++,o===i&&n(r)}e.forEach(function(s){t(s,a)})}function EE(e,t,n){var r=0,o=e.length;function i(a){if(a&&a.length){n(a);return}var s=r;r+=1,st.max?o.push(wo(i.messages[v].max,t.fullField,t.max)):s&&c&&(pt.max)&&o.push(wo(i.messages[v].range,t.fullField,t.min,t.max))},bT=function(t,n,r,o,i,a){t.required&&(!r.hasOwnProperty(t.field)||yr(n,a||t.type))&&o.push(wo(i.messages.required,t.fullField))},Qf;const H8=function(){if(Qf)return Qf;var e="[a-fA-F\\d:]",t=function($){return $&&$.includeBoundaries?"(?:(?<=\\s|^)(?=".concat(e,")|(?<=").concat(e,")(?=\\s|$))"):""},n="(?:25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]\\d|\\d)(?:\\.(?:25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]\\d|\\d)){3}",r="[a-fA-F\\d]{1,4}",o=["(?:".concat(r,":){7}(?:").concat(r,"|:)"),"(?:".concat(r,":){6}(?:").concat(n,"|:").concat(r,"|:)"),"(?:".concat(r,":){5}(?::").concat(n,"|(?::").concat(r,"){1,2}|:)"),"(?:".concat(r,":){4}(?:(?::").concat(r,"){0,1}:").concat(n,"|(?::").concat(r,"){1,3}|:)"),"(?:".concat(r,":){3}(?:(?::").concat(r,"){0,2}:").concat(n,"|(?::").concat(r,"){1,4}|:)"),"(?:".concat(r,":){2}(?:(?::").concat(r,"){0,3}:").concat(n,"|(?::").concat(r,"){1,5}|:)"),"(?:".concat(r,":){1}(?:(?::").concat(r,"){0,4}:").concat(n,"|(?::").concat(r,"){1,6}|:)"),"(?::(?:(?::".concat(r,"){0,5}:").concat(n,"|(?::").concat(r,"){1,7}|:))")],i="(?:%[0-9a-zA-Z]{1,})?",a="(?:".concat(o.join("|"),")").concat(i),s=new RegExp("(?:^".concat(n,"$)|(?:^").concat(a,"$)")),c=new RegExp("^".concat(n,"$")),u=new RegExp("^".concat(a,"$")),p=function($){return $&&$.exact?s:new RegExp("(?:".concat(t($)).concat(n).concat(t($),")|(?:").concat(t($)).concat(a).concat(t($),")"),"g")};p.v4=function(O){return O&&O.exact?c:new 
RegExp("".concat(t(O)).concat(n).concat(t(O)),"g")},p.v6=function(O){return O&&O.exact?u:new RegExp("".concat(t(O)).concat(a).concat(t(O)),"g")};var v="(?:(?:[a-z]+:)?//)",h="(?:\\S+(?::\\S*)?@)?",m=p.v4().source,b=p.v6().source,y="(?:(?:[a-z\\u00a1-\\uffff0-9][-_]*)*[a-z\\u00a1-\\uffff0-9]+)",w="(?:\\.(?:[a-z\\u00a1-\\uffff0-9]-*)*[a-z\\u00a1-\\uffff0-9]+)*",C="(?:\\.(?:[a-z\\u00a1-\\uffff]{2,}))",S="(?::\\d{2,5})?",E='(?:[/?#][^\\s"]*)?',k="(?:".concat(v,"|www\\.)").concat(h,"(?:localhost|").concat(m,"|").concat(b,"|").concat(y).concat(w).concat(C,")").concat(S).concat(E);return Qf=new RegExp("(?:^".concat(k,"$)"),"i"),Qf};var IE={email:/^(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}])|(([a-zA-Z\-0-9\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF]+\.)+[a-zA-Z\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF]{2,}))$/,hex:/^#?([a-f0-9]{6}|[a-f0-9]{3})$/i},du={integer:function(t){return du.number(t)&&parseInt(t,10)===t},float:function(t){return du.number(t)&&!du.integer(t)},array:function(t){return Array.isArray(t)},regexp:function(t){if(t instanceof RegExp)return!0;try{return!!new RegExp(t)}catch{return!1}},date:function(t){return typeof t.getTime=="function"&&typeof t.getMonth=="function"&&typeof t.getYear=="function"&&!isNaN(t.getTime())},number:function(t){return isNaN(t)?!1:typeof t=="number"},object:function(t){return st(t)==="object"&&!du.array(t)},method:function(t){return typeof t=="function"},email:function(t){return typeof t=="string"&&t.length<=320&&!!t.match(IE.email)},url:function(t){return typeof t=="string"&&t.length<=2048&&!!t.match(H8())},hex:function(t){return typeof t=="string"&&!!t.match(IE.hex)}},F8=function(t,n,r,o,i){if(t.required&&n===void 0){bT(t,n,r,o,i);return}var 
a=["integer","float","array","regexp","object","method","email","number","date","url","hex"],s=t.type;a.indexOf(s)>-1?du[s](n)||o.push(wo(i.messages.types[s],t.fullField,t.type)):s&&st(n)!==t.type&&o.push(wo(i.messages.types[s],t.fullField,t.type))},_8=function(t,n,r,o,i){(/^\s+$/.test(n)||n==="")&&o.push(wo(i.messages.whitespace,t.fullField))};const pn={required:bT,whitespace:_8,type:F8,range:z8,enum:B8,pattern:A8};var V8=function(t,n,r,o,i){var a=[],s=t.required||!t.required&&o.hasOwnProperty(t.field);if(s){if(yr(n)&&!t.required)return r();pn.required(t,n,o,a,i)}r(a)},W8=function(t,n,r,o,i){var a=[],s=t.required||!t.required&&o.hasOwnProperty(t.field);if(s){if(n==null&&!t.required)return r();pn.required(t,n,o,a,i,"array"),n!=null&&(pn.type(t,n,o,a,i),pn.range(t,n,o,a,i))}r(a)},U8=function(t,n,r,o,i){var a=[],s=t.required||!t.required&&o.hasOwnProperty(t.field);if(s){if(yr(n)&&!t.required)return r();pn.required(t,n,o,a,i),n!==void 0&&pn.type(t,n,o,a,i)}r(a)},K8=function(t,n,r,o,i){var a=[],s=t.required||!t.required&&o.hasOwnProperty(t.field);if(s){if(yr(n,"date")&&!t.required)return r();if(pn.required(t,n,o,a,i),!yr(n,"date")){var c;n instanceof Date?c=n:c=new Date(n),pn.type(t,c,o,a,i),c&&pn.range(t,c.getTime(),o,a,i)}}r(a)},q8="enum",X8=function(t,n,r,o,i){var a=[],s=t.required||!t.required&&o.hasOwnProperty(t.field);if(s){if(yr(n)&&!t.required)return r();pn.required(t,n,o,a,i),n!==void 0&&pn[q8](t,n,o,a,i)}r(a)},G8=function(t,n,r,o,i){var a=[],s=t.required||!t.required&&o.hasOwnProperty(t.field);if(s){if(yr(n)&&!t.required)return r();pn.required(t,n,o,a,i),n!==void 0&&(pn.type(t,n,o,a,i),pn.range(t,n,o,a,i))}r(a)},Y8=function(t,n,r,o,i){var a=[],s=t.required||!t.required&&o.hasOwnProperty(t.field);if(s){if(yr(n)&&!t.required)return r();pn.required(t,n,o,a,i),n!==void 0&&(pn.type(t,n,o,a,i),pn.range(t,n,o,a,i))}r(a)},Q8=function(t,n,r,o,i){var a=[],s=t.required||!t.required&&o.hasOwnProperty(t.field);if(s){if(yr(n)&&!t.required)return 
r();pn.required(t,n,o,a,i),n!==void 0&&pn.type(t,n,o,a,i)}r(a)},Z8=function(t,n,r,o,i){var a=[],s=t.required||!t.required&&o.hasOwnProperty(t.field);if(s){if(n===""&&(n=void 0),yr(n)&&!t.required)return r();pn.required(t,n,o,a,i),n!==void 0&&(pn.type(t,n,o,a,i),pn.range(t,n,o,a,i))}r(a)},J8=function(t,n,r,o,i){var a=[],s=t.required||!t.required&&o.hasOwnProperty(t.field);if(s){if(yr(n)&&!t.required)return r();pn.required(t,n,o,a,i),n!==void 0&&pn.type(t,n,o,a,i)}r(a)},eH=function(t,n,r,o,i){var a=[],s=t.required||!t.required&&o.hasOwnProperty(t.field);if(s){if(yr(n,"string")&&!t.required)return r();pn.required(t,n,o,a,i),yr(n,"string")||pn.pattern(t,n,o,a,i)}r(a)},tH=function(t,n,r,o,i){var a=[],s=t.required||!t.required&&o.hasOwnProperty(t.field);if(s){if(yr(n)&&!t.required)return r();pn.required(t,n,o,a,i),yr(n)||pn.type(t,n,o,a,i)}r(a)},nH=function(t,n,r,o,i){var a=[],s=Array.isArray(n)?"array":st(n);pn.required(t,n,o,a,i,s),r(a)},rH=function(t,n,r,o,i){var a=[],s=t.required||!t.required&&o.hasOwnProperty(t.field);if(s){if(yr(n,"string")&&!t.required)return r();pn.required(t,n,o,a,i,"string"),yr(n,"string")||(pn.type(t,n,o,a,i),pn.range(t,n,o,a,i),pn.pattern(t,n,o,a,i),t.whitespace===!0&&pn.whitespace(t,n,o,a,i))}r(a)},Cm=function(t,n,r,o,i){var a=t.type,s=[],c=t.required||!t.required&&o.hasOwnProperty(t.field);if(c){if(yr(n,a)&&!t.required)return r();pn.required(t,n,o,s,i,a),yr(n,a)||pn.type(t,n,o,s,i)}r(s)};const Cu={string:rH,method:Q8,number:Z8,boolean:U8,regexp:tH,integer:Y8,float:G8,array:W8,object:J8,enum:X8,pattern:eH,date:K8,url:Cm,hex:Cm,email:Cm,required:nH,any:V8};var Ed=function(){function e(t){Kn(this,e),K(this,"rules",null),K(this,"_messages",vb),this.define(t)}return qn(e,[{key:"define",value:function(n){var r=this;if(!n)throw new Error("Cannot configure a schema with no rules");if(st(n)!=="object"||Array.isArray(n))throw new Error("Rules must be an object");this.rules={},Object.keys(n).forEach(function(o){var 
i=n[o];r.rules[o]=Array.isArray(i)?i:[i]})}},{key:"messages",value:function(n){return n&&(this._messages=$E(pb(),n)),this._messages}},{key:"validate",value:function(n){var r=this,o=arguments.length>1&&arguments[1]!==void 0?arguments[1]:{},i=arguments.length>2&&arguments[2]!==void 0?arguments[2]:function(){},a=n,s=o,c=i;if(typeof s=="function"&&(c=s,s={}),!this.rules||Object.keys(this.rules).length===0)return c&&c(null,a),Promise.resolve(a);function u(b){var y=[],w={};function C(E){if(Array.isArray(E)){var k;y=(k=y).concat.apply(k,Se(E))}else y.push(E)}for(var S=0;S0&&arguments[0]!==void 0?arguments[0]:[],M=Array.isArray(T)?T:[T];!s.suppressWarning&&M.length&&e.warning("async-validator:",M),M.length&&w.message!==void 0&&(M=[].concat(w.message));var P=M.map(OE(w,a));if(s.first&&P.length)return m[w.field]=1,y(P);if(!C)y(P);else{if(w.required&&!b.value)return w.message!==void 0?P=[].concat(w.message).map(OE(w,a)):s.error&&(P=[s.error(w,wo(s.messages.required,w.field))]),y(P);var R={};w.defaultField&&Object.keys(b.value).map(function(z){R[z]=w.defaultField}),R=Z(Z({},R),b.rule.fields);var A={};Object.keys(R).forEach(function(z){var B=R[z],_=Array.isArray(B)?B:[B];A[z]=_.map(S.bind(null,z))});var V=new e(A);V.messages(s.messages),b.rule.options&&(b.rule.options.messages=s.messages,b.rule.options.error=s.error),V.validate(b.value,b.rule.options||s,function(z){var B=[];P&&P.length&&B.push.apply(B,Se(P)),z&&z.length&&B.push.apply(B,Se(z)),y(B.length?B:null)})}}var k;if(w.asyncValidator)k=w.asyncValidator(w,b.value,E,b.source,s);else if(w.validator){try{k=w.validator(w,b.value,E,b.source,s)}catch(T){var O,$;(O=($=console).error)===null||O===void 0||O.call($,T),s.suppressValidatorError||setTimeout(function(){throw T},0),E(T.message)}k===!0?E():k===!1?E(typeof w.message=="function"?w.message(w.fullField||w.field):w.message||"".concat(w.fullField||w.field," fails")):k instanceof Array?E(k):k instanceof Error&&E(k.message)}k&&k.then&&k.then(function(){return 
E()},function(T){return E(T)})},function(b){u(b)},a)}},{key:"getType",value:function(n){if(n.type===void 0&&n.pattern instanceof RegExp&&(n.type="pattern"),typeof n.validator!="function"&&n.type&&!Cu.hasOwnProperty(n.type))throw new Error(wo("Unknown rule type %s",n.type));return n.type||"string"}},{key:"getValidationMethod",value:function(n){if(typeof n.validator=="function")return n.validator;var r=Object.keys(n),o=r.indexOf("message");return o!==-1&&r.splice(o,1),r.length===1&&r[0]==="required"?Cu.required:Cu[this.getType(n)]||void 0}}]),e}();K(Ed,"register",function(t,n){if(typeof n!="function")throw new Error("Cannot register a validator by type, validator is not a function");Cu[t]=n});K(Ed,"warning",P8);K(Ed,"messages",vb);K(Ed,"validators",Cu);var ho="'${name}' is not a valid ${type}",yT={default:"Validation error on field '${name}'",required:"'${name}' is required",enum:"'${name}' must be one of [${enum}]",whitespace:"'${name}' cannot be empty",date:{format:"'${name}' is invalid for format date",parse:"'${name}' could not be parsed as date",invalid:"'${name}' is invalid date"},types:{string:ho,method:ho,array:ho,object:ho,number:ho,date:ho,boolean:ho,integer:ho,float:ho,regexp:ho,email:ho,url:ho,hex:ho},string:{len:"'${name}' must be exactly ${len} characters",min:"'${name}' must be at least ${min} characters",max:"'${name}' cannot be longer than ${max} characters",range:"'${name}' must be between ${min} and ${max} characters"},number:{len:"'${name}' must equal ${len}",min:"'${name}' cannot be less than ${min}",max:"'${name}' cannot be greater than ${max}",range:"'${name}' must be between ${min} and ${max}"},array:{len:"'${name}' must be exactly ${len} in length",min:"'${name}' cannot be less than ${min} in length",max:"'${name}' cannot be greater than ${max} in length",range:"'${name}' must be between ${min} and ${max} in length"},pattern:{mismatch:"'${name}' does not match pattern ${pattern}"}},TE=Ed;function oH(e,t){return 
e.replace(/\\?\$\{\w+\}/g,function(n){if(n.startsWith("\\"))return n.slice(1);var r=n.slice(2,-1);return t[r]})}var PE="CODE_LOGIC_ERROR";function mb(e,t,n,r,o){return bb.apply(this,arguments)}function bb(){return bb=yo($n().mark(function e(t,n,r,o,i){var a,s,c,u,p,v,h,m,b;return $n().wrap(function(w){for(;;)switch(w.prev=w.next){case 0:return a=Z({},r),delete a.ruleIndex,TE.warning=function(){},a.validator&&(s=a.validator,a.validator=function(){try{return s.apply(void 0,arguments)}catch{return Promise.reject(PE)}}),c=null,a&&a.type==="array"&&a.defaultField&&(c=a.defaultField,delete a.defaultField),u=new TE(K({},t,[a])),p=Cl(yT,o.validateMessages),u.messages(p),v=[],w.prev=10,w.next=13,Promise.resolve(u.validate(K({},t,n),Z({},o)));case 13:w.next=18;break;case 15:w.prev=15,w.t0=w.catch(10),w.t0.errors&&(v=w.t0.errors.map(function(C,S){var E=C.message,k=E===PE?p.default:E;return d.isValidElement(k)?d.cloneElement(k,{key:"error_".concat(S)}):k}));case 18:if(!(!v.length&&c)){w.next=23;break}return w.next=21,Promise.all(n.map(function(C,S){return mb("".concat(t,".").concat(S),C,c,o,i)}));case 21:return h=w.sent,w.abrupt("return",h.reduce(function(C,S){return[].concat(Se(C),Se(S))},[]));case 23:return m=Z(Z({},r),{},{name:t,enum:(r.enum||[]).join(", ")},i),b=v.map(function(C){return typeof C=="string"?oH(C,m):C}),w.abrupt("return",b);case 26:case"end":return w.stop()}},e,null,[[10,15]])})),bb.apply(this,arguments)}function iH(e,t,n,r,o,i){var a=e.join("."),s=n.map(function(p,v){var h=p.validator,m=Z(Z({},p),{},{ruleIndex:v});return h&&(m.validator=function(b,y,w){var C=!1,S=function(){for(var O=arguments.length,$=new Array(O),T=0;T2&&arguments[2]!==void 0?arguments[2]:!1;return e&&e.some(function(r){return wT(t,r,n)})}function wT(e,t){var n=arguments.length>2&&arguments[2]!==void 0?arguments[2]:!1;return!e||!t||!n&&e.length!==t.length?!1:t.every(function(r,o){return e[o]===r})}function 
lH(e,t){if(e===t)return!0;if(!e&&t||e&&!t||!e||!t||st(e)!=="object"||st(t)!=="object")return!1;var n=Object.keys(e),r=Object.keys(t),o=new Set([].concat(n,r));return Se(o).every(function(i){var a=e[i],s=t[i];return typeof a=="function"&&typeof s=="function"?!0:a===s})}function cH(e){var t=arguments.length<=1?void 0:arguments[1];return t&&t.target&&st(t.target)==="object"&&e in t.target?t.target[e]:t}function NE(e,t,n){var r=e.length;if(t<0||t>=r||n<0||n>=r)return e;var o=e[t],i=t-n;return i>0?[].concat(Se(e.slice(0,n)),[o],Se(e.slice(n,t)),Se(e.slice(t+1,r))):i<0?[].concat(Se(e.slice(0,t)),Se(e.slice(t+1,n+1)),[o],Se(e.slice(n+1,r))):e}var uH=["name"],Ro=[];function Em(e,t,n,r,o,i){return typeof e=="function"?e(t,n,"source"in i?{source:i.source}:{}):r!==o}var vw=function(e){Co(n,e);var t=Eo(n);function n(r){var o;if(Kn(this,n),o=t.call(this,r),K(Ne(o),"state",{resetCount:0}),K(Ne(o),"cancelRegisterFunc",null),K(Ne(o),"mounted",!1),K(Ne(o),"touched",!1),K(Ne(o),"dirty",!1),K(Ne(o),"validatePromise",void 0),K(Ne(o),"prevValidating",void 0),K(Ne(o),"errors",Ro),K(Ne(o),"warnings",Ro),K(Ne(o),"cancelRegister",function(){var c=o.props,u=c.preserve,p=c.isListField,v=c.name;o.cancelRegisterFunc&&o.cancelRegisterFunc(p,u,rr(v)),o.cancelRegisterFunc=null}),K(Ne(o),"getNamePath",function(){var c=o.props,u=c.name,p=c.fieldContext,v=p.prefixName,h=v===void 0?[]:v;return u!==void 0?[].concat(Se(h),Se(u)):[]}),K(Ne(o),"getRules",function(){var c=o.props,u=c.rules,p=u===void 0?[]:u,v=c.fieldContext;return p.map(function(h){return typeof h=="function"?h(v):h})}),K(Ne(o),"refresh",function(){o.mounted&&o.setState(function(c){var u=c.resetCount;return{resetCount:u+1}})}),K(Ne(o),"metaCache",null),K(Ne(o),"triggerMetaEvent",function(c){var u=o.props.onMetaChange;if(u){var p=Z(Z({},o.getMeta()),{},{destroy:c});zi(o.metaCache,p)||u(p),o.metaCache=p}else o.metaCache=null}),K(Ne(o),"onStoreChange",function(c,u,p){var v=o.props,h=v.shouldUpdate,m=v.dependencies,b=m===void 
0?[]:m,y=v.onReset,w=p.store,C=o.getNamePath(),S=o.getValue(c),E=o.getValue(w),k=u&&Pl(u,C);switch(p.type==="valueUpdate"&&p.source==="external"&&!zi(S,E)&&(o.touched=!0,o.dirty=!0,o.validatePromise=null,o.errors=Ro,o.warnings=Ro,o.triggerMetaEvent()),p.type){case"reset":if(!u||k){o.touched=!1,o.dirty=!1,o.validatePromise=void 0,o.errors=Ro,o.warnings=Ro,o.triggerMetaEvent(),y==null||y(),o.refresh();return}break;case"remove":if(h&&Em(h,c,w,S,E,p)){o.reRender();return}break;case"setField":var O=p.data;if(k){"touched"in O&&(o.touched=O.touched),"validating"in O&&!("originRCField"in O)&&(o.validatePromise=O.validating?Promise.resolve([]):null),"errors"in O&&(o.errors=O.errors||Ro),"warnings"in O&&(o.warnings=O.warnings||Ro),o.dirty=!0,o.triggerMetaEvent(),o.reRender();return}else if("value"in O&&Pl(u,C,!0)){o.reRender();return}if(h&&!C.length&&Em(h,c,w,S,E,p)){o.reRender();return}break;case"dependenciesUpdate":var $=b.map(rr);if($.some(function(T){return Pl(p.relatedFields,T)})){o.reRender();return}break;default:if(k||(!b.length||C.length||h)&&Em(h,c,w,S,E,p)){o.reRender();return}break}h===!0&&o.reRender()}),K(Ne(o),"validateRules",function(c){var u=o.getNamePath(),p=o.getValue(),v=c||{},h=v.triggerName,m=v.validateOnly,b=m===void 0?!1:m,y=Promise.resolve().then(yo($n().mark(function w(){var C,S,E,k,O,$,T;return $n().wrap(function(P){for(;;)switch(P.prev=P.next){case 0:if(o.mounted){P.next=2;break}return P.abrupt("return",[]);case 2:if(C=o.props,S=C.validateFirst,E=S===void 0?!1:S,k=C.messageVariables,O=C.validateDebounce,$=o.getRules(),h&&($=$.filter(function(R){return R}).filter(function(R){var A=R.validateTrigger;if(!A)return!0;var V=fb(A);return V.includes(h)})),!(O&&h)){P.next=10;break}return P.next=8,new Promise(function(R){setTimeout(R,O)});case 8:if(o.validatePromise===y){P.next=10;break}return P.abrupt("return",[]);case 10:return T=iH(u,p,$,c,E,k),T.catch(function(R){return R}).then(function(){var R=arguments.length>0&&arguments[0]!==void 
0?arguments[0]:Ro;if(o.validatePromise===y){var A;o.validatePromise=null;var V=[],z=[];(A=R.forEach)===null||A===void 0||A.call(R,function(B){var _=B.rule.warningOnly,H=B.errors,j=H===void 0?Ro:H;_?z.push.apply(z,Se(j)):V.push.apply(V,Se(j))}),o.errors=V,o.warnings=z,o.triggerMetaEvent(),o.reRender()}}),P.abrupt("return",T);case 13:case"end":return P.stop()}},w)})));return b||(o.validatePromise=y,o.dirty=!0,o.errors=Ro,o.warnings=Ro,o.triggerMetaEvent(),o.reRender()),y}),K(Ne(o),"isFieldValidating",function(){return!!o.validatePromise}),K(Ne(o),"isFieldTouched",function(){return o.touched}),K(Ne(o),"isFieldDirty",function(){if(o.dirty||o.props.initialValue!==void 0)return!0;var c=o.props.fieldContext,u=c.getInternalHooks(ys),p=u.getInitialValue;return p(o.getNamePath())!==void 0}),K(Ne(o),"getErrors",function(){return o.errors}),K(Ne(o),"getWarnings",function(){return o.warnings}),K(Ne(o),"isListField",function(){return o.props.isListField}),K(Ne(o),"isList",function(){return o.props.isList}),K(Ne(o),"isPreserve",function(){return o.props.preserve}),K(Ne(o),"getMeta",function(){o.prevValidating=o.isFieldValidating();var c={touched:o.isFieldTouched(),validating:o.prevValidating,errors:o.errors,warnings:o.warnings,name:o.getNamePath(),validated:o.validatePromise===null};return c}),K(Ne(o),"getOnlyChild",function(c){if(typeof c=="function"){var u=o.getMeta();return Z(Z({},o.getOnlyChild(c(o.getControlled(),u,o.props.fieldContext))),{},{isFunction:!0})}var p=lo(c);return p.length!==1||!d.isValidElement(p[0])?{child:p,isFunction:!1}:{child:p[0],isFunction:!1}}),K(Ne(o),"getValue",function(c){var u=o.props.fieldContext.getFieldsValue,p=o.getNamePath();return bo(c||u(!0),p)}),K(Ne(o),"getControlled",function(){var c=arguments.length>0&&arguments[0]!==void 0?arguments[0]:{},u=o.props,p=u.name,v=u.trigger,h=u.validateTrigger,m=u.getValueFromEvent,b=u.normalize,y=u.valuePropName,w=u.getValueProps,C=u.fieldContext,S=h!==void 
0?h:C.validateTrigger,E=o.getNamePath(),k=C.getInternalHooks,O=C.getFieldsValue,$=k(ys),T=$.dispatch,M=o.getValue(),P=w||function(B){return K({},y,B)},R=c[v],A=p!==void 0?P(M):{},V=Z(Z({},c),A);V[v]=function(){o.touched=!0,o.dirty=!0,o.triggerMetaEvent();for(var B,_=arguments.length,H=new Array(_),j=0;j<_;j++)H[j]=arguments[j];m?B=m.apply(void 0,H):B=cH.apply(void 0,[y].concat(H)),b&&(B=b(B,M,O(!0))),T({type:"updateValue",namePath:E,value:B}),R&&R.apply(void 0,H)};var z=fb(S||[]);return z.forEach(function(B){var _=V[B];V[B]=function(){_&&_.apply(void 0,arguments);var H=o.props.rules;H&&H.length&&T({type:"validateField",namePath:E,triggerName:B})}}),V}),r.fieldContext){var i=r.fieldContext.getInternalHooks,a=i(ys),s=a.initEntityValue;s(Ne(o))}return o}return qn(n,[{key:"componentDidMount",value:function(){var o=this.props,i=o.shouldUpdate,a=o.fieldContext;if(this.mounted=!0,a){var s=a.getInternalHooks,c=s(ys),u=c.registerField;this.cancelRegisterFunc=u(this)}i===!0&&this.reRender()}},{key:"componentWillUnmount",value:function(){this.cancelRegister(),this.triggerMetaEvent(!0),this.mounted=!1}},{key:"reRender",value:function(){this.mounted&&this.forceUpdate()}},{key:"render",value:function(){var o=this.state.resetCount,i=this.props.children,a=this.getOnlyChild(i),s=a.child,c=a.isFunction,u;return c?u=s:d.isValidElement(s)?u=d.cloneElement(s,this.getControlled(s.props)):(Fn(!s,"`children` of Field is not validate ReactElement."),u=s),d.createElement(d.Fragment,{key:o},u)}}]),n}(d.Component);K(vw,"contextType",Ms);K(vw,"defaultProps",{trigger:"onChange",valuePropName:"value"});function hw(e){var t=e.name,n=Mt(e,uH),r=d.useContext(Ms),o=d.useContext(td),i=t!==void 0?rr(t):void 0,a="keep";return n.isListField||(a="_".concat((i||[]).join("_"))),d.createElement(vw,$e({key:a,name:i,isListField:!!o},n,{fieldContext:r}))}function xT(e){var 
t=e.name,n=e.initialValue,r=e.children,o=e.rules,i=e.validateTrigger,a=e.isListField,s=d.useContext(Ms),c=d.useContext(td),u=d.useRef({keys:[],id:0}),p=u.current,v=d.useMemo(function(){var y=rr(s.prefixName)||[];return[].concat(Se(y),Se(rr(t)))},[s.prefixName,t]),h=d.useMemo(function(){return Z(Z({},s),{},{prefixName:v})},[s,v]),m=d.useMemo(function(){return{getKey:function(w){var C=v.length,S=w[C];return[p.keys[S],w.slice(C+1)]}}},[v]);if(typeof r!="function")return Fn(!1,"Form.List only accepts function as children."),null;var b=function(w,C,S){var E=S.source;return E==="internal"?!1:w!==C};return d.createElement(td.Provider,{value:m},d.createElement(Ms.Provider,{value:h},d.createElement(hw,{name:[],shouldUpdate:b,rules:o,validateTrigger:i,initialValue:n,isList:!0,isListField:a??!!c},function(y,w){var C=y.value,S=C===void 0?[]:C,E=y.onChange,k=s.getFieldValue,O=function(){var P=k(v||[]);return P||[]},$={add:function(P,R){var A=O();R>=0&&R<=A.length?(p.keys=[].concat(Se(p.keys.slice(0,R)),[p.id],Se(p.keys.slice(R))),E([].concat(Se(A.slice(0,R)),[P],Se(A.slice(R))))):(p.keys=[].concat(Se(p.keys),[p.id]),E([].concat(Se(A),[P]))),p.id+=1},remove:function(P){var R=O(),A=new Set(Array.isArray(P)?P:[P]);A.size<=0||(p.keys=p.keys.filter(function(V,z){return!A.has(z)}),E(R.filter(function(V,z){return!A.has(z)})))},move:function(P,R){if(P!==R){var A=O();P<0||P>=A.length||R<0||R>=A.length||(p.keys=NE(p.keys,P,R),E(NE(A,P,R)))}}},T=S||[];return Array.isArray(T)||(T=[]),r(T.map(function(M,P){var R=p.keys[P];return R===void 0&&(p.keys[P]=p.id,R=p.keys[P],p.id+=1),{name:P,key:R,isListField:!0}}),$,w)})))}function dH(e){var t=!1,n=e.length,r=[];return e.length?new Promise(function(o,i){e.forEach(function(a,s){a.catch(function(c){return t=!0,c}).then(function(c){n-=1,r[s]=c,!(n>0)&&(t&&i(r),o(r))})})}):Promise.resolve([])}var ST="__@field_split__";function km(e){return e.map(function(t){return"".concat(st(t),":").concat(t)}).join(ST)}var ul=function(){function 
e(){Kn(this,e),K(this,"kvs",new Map)}return qn(e,[{key:"set",value:function(n,r){this.kvs.set(km(n),r)}},{key:"get",value:function(n){return this.kvs.get(km(n))}},{key:"update",value:function(n,r){var o=this.get(n),i=r(o);i?this.set(n,i):this.delete(n)}},{key:"delete",value:function(n){this.kvs.delete(km(n))}},{key:"map",value:function(n){return Se(this.kvs.entries()).map(function(r){var o=ve(r,2),i=o[0],a=o[1],s=i.split(ST);return n({key:s.map(function(c){var u=c.match(/^([^:]*):(.*)$/),p=ve(u,3),v=p[1],h=p[2];return v==="number"?Number(h):h}),value:a})})}},{key:"toJSON",value:function(){var n={};return this.map(function(r){var o=r.key,i=r.value;return n[o.join(".")]=i,null}),n}}]),e}(),fH=["name"],pH=qn(function e(t){var n=this;Kn(this,e),K(this,"formHooked",!1),K(this,"forceRootUpdate",void 0),K(this,"subscribable",!0),K(this,"store",{}),K(this,"fieldEntities",[]),K(this,"initialValues",{}),K(this,"callbacks",{}),K(this,"validateMessages",null),K(this,"preserve",null),K(this,"lastValidatePromise",null),K(this,"getForm",function(){return{getFieldValue:n.getFieldValue,getFieldsValue:n.getFieldsValue,getFieldError:n.getFieldError,getFieldWarning:n.getFieldWarning,getFieldsError:n.getFieldsError,isFieldsTouched:n.isFieldsTouched,isFieldTouched:n.isFieldTouched,isFieldValidating:n.isFieldValidating,isFieldsValidating:n.isFieldsValidating,resetFields:n.resetFields,setFields:n.setFields,setFieldValue:n.setFieldValue,setFieldsValue:n.setFieldsValue,validateFields:n.validateFields,submit:n.submit,_init:!0,getInternalHooks:n.getInternalHooks}}),K(this,"getInternalHooks",function(r){return 
r===ys?(n.formHooked=!0,{dispatch:n.dispatch,initEntityValue:n.initEntityValue,registerField:n.registerField,useSubscribe:n.useSubscribe,setInitialValues:n.setInitialValues,destroyForm:n.destroyForm,setCallbacks:n.setCallbacks,setValidateMessages:n.setValidateMessages,getFields:n.getFields,setPreserve:n.setPreserve,getInitialValue:n.getInitialValue,registerWatch:n.registerWatch}):(Fn(!1,"`getInternalHooks` is internal usage. Should not call directly."),null)}),K(this,"useSubscribe",function(r){n.subscribable=r}),K(this,"prevWithoutPreserves",null),K(this,"setInitialValues",function(r,o){if(n.initialValues=r||{},o){var i,a=Cl(r,n.store);(i=n.prevWithoutPreserves)===null||i===void 0||i.map(function(s){var c=s.key;a=ai(a,c,bo(r,c))}),n.prevWithoutPreserves=null,n.updateStore(a)}}),K(this,"destroyForm",function(r){if(r)n.updateStore({});else{var o=new ul;n.getFieldEntities(!0).forEach(function(i){n.isMergedPreserve(i.isPreserve())||o.set(i.getNamePath(),!0)}),n.prevWithoutPreserves=o}}),K(this,"getInitialValue",function(r){var o=bo(n.initialValues,r);return r.length?Cl(o):o}),K(this,"setCallbacks",function(r){n.callbacks=r}),K(this,"setValidateMessages",function(r){n.validateMessages=r}),K(this,"setPreserve",function(r){n.preserve=r}),K(this,"watchList",[]),K(this,"registerWatch",function(r){return n.watchList.push(r),function(){n.watchList=n.watchList.filter(function(o){return o!==r})}}),K(this,"notifyWatch",function(){var r=arguments.length>0&&arguments[0]!==void 0?arguments[0]:[];if(n.watchList.length){var o=n.getFieldsValue(),i=n.getFieldsValue(!0);n.watchList.forEach(function(a){a(o,i,r)})}}),K(this,"timeoutId",null),K(this,"warningUnhooked",function(){}),K(this,"updateStore",function(r){n.store=r}),K(this,"getFieldEntities",function(){var r=arguments.length>0&&arguments[0]!==void 0?arguments[0]:!1;return r?n.fieldEntities.filter(function(o){return o.getNamePath().length}):n.fieldEntities}),K(this,"getFieldsMap",function(){var 
r=arguments.length>0&&arguments[0]!==void 0?arguments[0]:!1,o=new ul;return n.getFieldEntities(r).forEach(function(i){var a=i.getNamePath();o.set(a,i)}),o}),K(this,"getFieldEntitiesForNamePathList",function(r){if(!r)return n.getFieldEntities(!0);var o=n.getFieldsMap(!0);return r.map(function(i){var a=rr(i);return o.get(a)||{INVALIDATE_NAME_PATH:rr(i)}})}),K(this,"getFieldsValue",function(r,o){n.warningUnhooked();var i,a,s;if(r===!0||Array.isArray(r)?(i=r,a=o):r&&st(r)==="object"&&(s=r.strict,a=r.filter),i===!0&&!a)return n.store;var c=n.getFieldEntitiesForNamePathList(Array.isArray(i)?i:null),u=[];return c.forEach(function(p){var v,h,m="INVALIDATE_NAME_PATH"in p?p.INVALIDATE_NAME_PATH:p.getNamePath();if(s){var b,y;if((b=(y=p).isList)!==null&&b!==void 0&&b.call(y))return}else if(!i&&(v=(h=p).isListField)!==null&&v!==void 0&&v.call(h))return;if(!a)u.push(m);else{var w="getMeta"in p?p.getMeta():null;a(w)&&u.push(m)}}),ME(n.store,u.map(rr))}),K(this,"getFieldValue",function(r){n.warningUnhooked();var o=rr(r);return bo(n.store,o)}),K(this,"getFieldsError",function(r){n.warningUnhooked();var o=n.getFieldEntitiesForNamePathList(r);return o.map(function(i,a){return i&&!("INVALIDATE_NAME_PATH"in i)?{name:i.getNamePath(),errors:i.getErrors(),warnings:i.getWarnings()}:{name:rr(r[a]),errors:[],warnings:[]}})}),K(this,"getFieldError",function(r){n.warningUnhooked();var o=rr(r),i=n.getFieldsError([o])[0];return i.errors}),K(this,"getFieldWarning",function(r){n.warningUnhooked();var o=rr(r),i=n.getFieldsError([o])[0];return i.warnings}),K(this,"isFieldsTouched",function(){n.warningUnhooked();for(var r=arguments.length,o=new Array(r),i=0;i0&&arguments[0]!==void 0?arguments[0]:{},o=new ul,i=n.getFieldEntities(!0);i.forEach(function(c){var u=c.props.initialValue,p=c.getNamePath();if(u!==void 0){var v=o.get(p)||new Set;v.add({entity:c,value:u}),o.set(p,v)}});var a=function(u){u.forEach(function(p){var v=p.props.initialValue;if(v!==void 0){var 
h=p.getNamePath(),m=n.getInitialValue(h);if(m!==void 0)Fn(!1,"Form already set 'initialValues' with path '".concat(h.join("."),"'. Field can not overwrite it."));else{var b=o.get(h);if(b&&b.size>1)Fn(!1,"Multiple Field with path '".concat(h.join("."),"' set 'initialValue'. Can not decide which one to pick."));else if(b){var y=n.getFieldValue(h),w=p.isListField();!w&&(!r.skipExist||y===void 0)&&n.updateStore(ai(n.store,h,Se(b)[0].value))}}}})},s;r.entities?s=r.entities:r.namePathList?(s=[],r.namePathList.forEach(function(c){var u=o.get(c);if(u){var p;(p=s).push.apply(p,Se(Se(u).map(function(v){return v.entity})))}})):s=i,a(s)}),K(this,"resetFields",function(r){n.warningUnhooked();var o=n.store;if(!r){n.updateStore(Cl(n.initialValues)),n.resetWithFieldInitialValue(),n.notifyObservers(o,null,{type:"reset"}),n.notifyWatch();return}var i=r.map(rr);i.forEach(function(a){var s=n.getInitialValue(a);n.updateStore(ai(n.store,a,s))}),n.resetWithFieldInitialValue({namePathList:i}),n.notifyObservers(o,i,{type:"reset"}),n.notifyWatch(i)}),K(this,"setFields",function(r){n.warningUnhooked();var o=n.store,i=[];r.forEach(function(a){var s=a.name,c=Mt(a,fH),u=rr(s);i.push(u),"value"in c&&n.updateStore(ai(n.store,u,c.value)),n.notifyObservers(o,[u],{type:"setField",data:a})}),n.notifyWatch(i)}),K(this,"getFields",function(){var r=n.getFieldEntities(!0),o=r.map(function(i){var a=i.getNamePath(),s=i.getMeta(),c=Z(Z({},s),{},{name:a,value:n.getFieldValue(a)});return Object.defineProperty(c,"originRCField",{value:!0}),c});return o}),K(this,"initEntityValue",function(r){var o=r.props.initialValue;if(o!==void 0){var i=r.getNamePath(),a=bo(n.store,i);a===void 0&&n.updateStore(ai(n.store,i,o))}}),K(this,"isMergedPreserve",function(r){var o=r!==void 0?r:n.preserve;return o??!0}),K(this,"registerField",function(r){n.fieldEntities.push(r);var o=r.getNamePath();if(n.notifyWatch([o]),r.props.initialValue!==void 0){var 
i=n.store;n.resetWithFieldInitialValue({entities:[r],skipExist:!0}),n.notifyObservers(i,[r.getNamePath()],{type:"valueUpdate",source:"internal"})}return function(a,s){var c=arguments.length>2&&arguments[2]!==void 0?arguments[2]:[];if(n.fieldEntities=n.fieldEntities.filter(function(v){return v!==r}),!n.isMergedPreserve(s)&&(!a||c.length>1)){var u=a?void 0:n.getInitialValue(o);if(o.length&&n.getFieldValue(o)!==u&&n.fieldEntities.every(function(v){return!wT(v.getNamePath(),o)})){var p=n.store;n.updateStore(ai(p,o,u,!0)),n.notifyObservers(p,[o],{type:"remove"}),n.triggerDependenciesUpdate(p,o)}}n.notifyWatch([o])}}),K(this,"dispatch",function(r){switch(r.type){case"updateValue":var o=r.namePath,i=r.value;n.updateValue(o,i);break;case"validateField":var a=r.namePath,s=r.triggerName;n.validateFields([a],{triggerName:s});break}}),K(this,"notifyObservers",function(r,o,i){if(n.subscribable){var a=Z(Z({},i),{},{store:n.getFieldsValue(!0)});n.getFieldEntities().forEach(function(s){var c=s.onStoreChange;c(r,o,a)})}else n.forceRootUpdate()}),K(this,"triggerDependenciesUpdate",function(r,o){var i=n.getDependencyChildrenFields(o);return i.length&&n.validateFields(i),n.notifyObservers(r,i,{type:"dependenciesUpdate",relatedFields:[o].concat(Se(i))}),i}),K(this,"updateValue",function(r,o){var i=rr(r),a=n.store;n.updateStore(ai(n.store,i,o)),n.notifyObservers(a,[i],{type:"valueUpdate",source:"internal"}),n.notifyWatch([i]);var s=n.triggerDependenciesUpdate(a,i),c=n.callbacks.onValuesChange;if(c){var u=ME(n.store,[i]);c(u,n.getFieldsValue())}n.triggerOnFieldsChange([i].concat(Se(s)))}),K(this,"setFieldsValue",function(r){n.warningUnhooked();var o=n.store;if(r){var i=Cl(n.store,r);n.updateStore(i)}n.notifyObservers(o,null,{type:"valueUpdate",source:"external"}),n.notifyWatch()}),K(this,"setFieldValue",function(r,o){n.setFields([{name:r,value:o}])}),K(this,"getDependencyChildrenFields",function(r){var o=new Set,i=[],a=new ul;n.getFieldEntities().forEach(function(c){var 
u=c.props.dependencies;(u||[]).forEach(function(p){var v=rr(p);a.update(v,function(){var h=arguments.length>0&&arguments[0]!==void 0?arguments[0]:new Set;return h.add(c),h})})});var s=function c(u){var p=a.get(u)||new Set;p.forEach(function(v){if(!o.has(v)){o.add(v);var h=v.getNamePath();v.isFieldDirty()&&h.length&&(i.push(h),c(h))}})};return s(r),i}),K(this,"triggerOnFieldsChange",function(r,o){var i=n.callbacks.onFieldsChange;if(i){var a=n.getFields();if(o){var s=new ul;o.forEach(function(u){var p=u.name,v=u.errors;s.set(p,v)}),a.forEach(function(u){u.errors=s.get(u.name)||u.errors})}var c=a.filter(function(u){var p=u.name;return Pl(r,p)});c.length&&i(c,a)}}),K(this,"validateFields",function(r,o){n.warningUnhooked();var i,a;Array.isArray(r)||typeof r=="string"||typeof o=="string"?(i=r,a=o):a=r;var s=!!i,c=s?i.map(rr):[],u=[],p=String(Date.now()),v=new Set,h=a||{},m=h.recursive,b=h.dirty;n.getFieldEntities(!0).forEach(function(S){if(s||c.push(S.getNamePath()),!(!S.props.rules||!S.props.rules.length)&&!(b&&!S.isFieldDirty())){var E=S.getNamePath();if(v.add(E.join(p)),!s||Pl(c,E,m)){var k=S.validateRules(Z({validateMessages:Z(Z({},yT),n.validateMessages)},a));u.push(k.then(function(){return{name:E,errors:[],warnings:[]}}).catch(function(O){var $,T=[],M=[];return($=O.forEach)===null||$===void 0||$.call(O,function(P){var R=P.rule.warningOnly,A=P.errors;R?M.push.apply(M,Se(A)):T.push.apply(T,Se(A))}),T.length?Promise.reject({name:E,errors:T,warnings:M}):{name:E,errors:T,warnings:M}}))}}});var y=dH(u);n.lastValidatePromise=y,y.catch(function(S){return S}).then(function(S){var E=S.map(function(k){var O=k.name;return O});n.notifyObservers(n.store,E,{type:"validateFinish"}),n.triggerOnFieldsChange(E,S)});var w=y.then(function(){return n.lastValidatePromise===y?Promise.resolve(n.getFieldsValue(c)):Promise.reject([])}).catch(function(S){var E=S.filter(function(k){return k&&k.errors.length});return 
Promise.reject({values:n.getFieldsValue(c),errorFields:E,outOfDate:n.lastValidatePromise!==y})});w.catch(function(S){return S});var C=c.filter(function(S){return v.has(S.join(p))});return n.triggerOnFieldsChange(C),w}),K(this,"submit",function(){n.warningUnhooked(),n.validateFields().then(function(r){var o=n.callbacks.onFinish;if(o)try{o(r)}catch{}}).catch(function(r){var o=n.callbacks.onFinishFailed;o&&o(r)})}),this.forceRootUpdate=t});function gw(e){var t=d.useRef(),n=d.useState({}),r=ve(n,2),o=r[1];if(!t.current)if(e)t.current=e;else{var i=function(){o({})},a=new pH(i);t.current=a.getForm()}return[t.current]}var xb=d.createContext({triggerFormChange:function(){},triggerFormFinish:function(){},registerForm:function(){},unregisterForm:function(){}}),CT=function(t){var n=t.validateMessages,r=t.onFormChange,o=t.onFormFinish,i=t.children,a=d.useContext(xb),s=d.useRef({});return d.createElement(xb.Provider,{value:Z(Z({},a),{},{validateMessages:Z(Z({},a.validateMessages),n),triggerFormChange:function(u,p){r&&r(u,{changedFields:p,forms:s.current}),a.triggerFormChange(u,p)},triggerFormFinish:function(u,p){o&&o(u,{values:p,forms:s.current}),a.triggerFormFinish(u,p)},registerForm:function(u,p){u&&(s.current=Z(Z({},s.current),{},K({},u,p))),a.registerForm(u,p)},unregisterForm:function(u){var p=Z({},s.current);delete p[u],s.current=p,a.unregisterForm(u)}})},i)},vH=["name","initialValues","fields","form","preserve","children","component","validateMessages","validateTrigger","onValuesChange","onFieldsChange","onFinish","onFinishFailed","clearOnDestroy"],hH=function(t,n){var r=t.name,o=t.initialValues,i=t.fields,a=t.form,s=t.preserve,c=t.children,u=t.component,p=u===void 0?"form":u,v=t.validateMessages,h=t.validateTrigger,m=h===void 
0?"onChange":h,b=t.onValuesChange,y=t.onFieldsChange,w=t.onFinish,C=t.onFinishFailed,S=t.clearOnDestroy,E=Mt(t,vH),k=d.useRef(null),O=d.useContext(xb),$=gw(a),T=ve($,1),M=T[0],P=M.getInternalHooks(ys),R=P.useSubscribe,A=P.setInitialValues,V=P.setCallbacks,z=P.setValidateMessages,B=P.setPreserve,_=P.destroyForm;d.useImperativeHandle(n,function(){return Z(Z({},M),{},{nativeElement:k.current})}),d.useEffect(function(){return O.registerForm(r,M),function(){O.unregisterForm(r)}},[O,M,r]),z(Z(Z({},O.validateMessages),v)),V({onValuesChange:b,onFieldsChange:function(q){if(O.triggerFormChange(r,q),y){for(var J=arguments.length,Y=new Array(J>1?J-1:0),Q=1;Q{}}),kT=d.createContext(null),OT=e=>{const t=Ln(e,["prefixCls"]);return d.createElement(CT,Object.assign({},t))},mw=d.createContext({prefixCls:""}),Vr=d.createContext({}),mH=e=>{let{children:t,status:n,override:r}=e;const o=d.useContext(Vr),i=d.useMemo(()=>{const a=Object.assign({},o);return r&&delete a.isFormItemInput,n&&(delete a.status,delete a.hasFeedback,delete a.feedbackIcon),a},[n,r,o]);return d.createElement(Vr.Provider,{value:i},t)},$T=d.createContext(void 0),nd=e=>{const{space:t,form:n,children:r}=e;if(r==null)return null;let o=r;return n&&(o=ue.createElement(mH,{override:!0,status:!0},o)),t&&(o=ue.createElement(q5,null,o)),o};function Zp(e){if(e)return{closable:e.closable,closeIcon:e.closeIcon}}function DE(e){const{closable:t,closeIcon:n}=e||{};return ue.useMemo(()=>{if(!t&&(t===!1||n===!1||n===null))return!1;if(t===void 0&&n===void 0)return null;let r={closeIcon:typeof n!="boolean"&&n!==null?n:void 0};return t&&typeof t=="object"&&(r=Object.assign(Object.assign({},r),t)),r},[t,n])}function jE(){const e={};for(var t=arguments.length,n=new Array(t),r=0;r{o&&Object.keys(o).forEach(i=>{o[i]!==void 0&&(e[i]=o[i])})}),e}const bH={};function IT(e,t){let n=arguments.length>2&&arguments[2]!==void 0?arguments[2]:bH;const r=DE(e),o=DE(t),i=typeof 
r!="boolean"?!!(r!=null&&r.disabled):!1,a=ue.useMemo(()=>Object.assign({closeIcon:ue.createElement(yd,null)},n),[n]),s=ue.useMemo(()=>r===!1?!1:r?jE(a,o,r):o===!1?!1:o?jE(a,o):a.closable?a:!1,[r,o,a]);return ue.useMemo(()=>{if(s===!1)return[!1,null,i];const{closeIconRender:c}=a,{closeIcon:u}=s;let p=u;if(p!=null){c&&(p=c(u));const v=Gr(s,!0);Object.keys(v).length&&(p=ue.isValidElement(p)?ue.cloneElement(p,v):ue.createElement("span",Object.assign({},v),p))}return[!0,p,i]},[s,a])}var TT=function(t){if($r()&&window.document.documentElement){var n=Array.isArray(t)?t:[t],r=window.document.documentElement;return n.some(function(o){return o in r.style})}return!1},yH=function(t,n){if(!TT(t))return!1;var r=document.createElement("div"),o=r.style[t];return r.style[t]=n,r.style[t]!==o};function Sb(e,t){return!Array.isArray(e)&&t!==void 0?yH(e,t):TT(e)}const wH=()=>$r()&&window.document.documentElement,Vv=e=>{const{prefixCls:t,className:n,style:r,size:o,shape:i}=e,a=ie({[`${t}-lg`]:o==="large",[`${t}-sm`]:o==="small"}),s=ie({[`${t}-circle`]:i==="circle",[`${t}-square`]:i==="square",[`${t}-round`]:i==="round"}),c=d.useMemo(()=>typeof o=="number"?{width:o,height:o,lineHeight:`${o}px`}:{},[o]);return d.createElement("span",{className:ie(t,a,s,n),style:Object.assign(Object.assign({},c),r)})},xH=new fn("ant-skeleton-loading",{"0%":{backgroundPosition:"100% 50%"},"100%":{backgroundPosition:"0 50%"}}),Wv=e=>({height:e,lineHeight:de(e)}),Ml=e=>Object.assign({width:e},Wv(e)),SH=e=>({background:e.skeletonLoadingBackground,backgroundSize:"400% 
100%",animationName:xH,animationDuration:e.skeletonLoadingMotionDuration,animationTimingFunction:"ease",animationIterationCount:"infinite"}),Om=(e,t)=>Object.assign({width:t(e).mul(5).equal(),minWidth:t(e).mul(5).equal()},Wv(e)),CH=e=>{const{skeletonAvatarCls:t,gradientFromColor:n,controlHeight:r,controlHeightLG:o,controlHeightSM:i}=e;return{[t]:Object.assign({display:"inline-block",verticalAlign:"top",background:n},Ml(r)),[`${t}${t}-circle`]:{borderRadius:"50%"},[`${t}${t}-lg`]:Object.assign({},Ml(o)),[`${t}${t}-sm`]:Object.assign({},Ml(i))}},EH=e=>{const{controlHeight:t,borderRadiusSM:n,skeletonInputCls:r,controlHeightLG:o,controlHeightSM:i,gradientFromColor:a,calc:s}=e;return{[r]:Object.assign({display:"inline-block",verticalAlign:"top",background:a,borderRadius:n},Om(t,s)),[`${r}-lg`]:Object.assign({},Om(o,s)),[`${r}-sm`]:Object.assign({},Om(i,s))}},LE=e=>Object.assign({width:e},Wv(e)),kH=e=>{const{skeletonImageCls:t,imageSizeBase:n,gradientFromColor:r,borderRadiusSM:o,calc:i}=e;return{[t]:Object.assign(Object.assign({display:"inline-flex",alignItems:"center",justifyContent:"center",verticalAlign:"middle",background:r,borderRadius:o},LE(i(n).mul(2).equal())),{[`${t}-path`]:{fill:"#bfbfbf"},[`${t}-svg`]:Object.assign(Object.assign({},LE(n)),{maxWidth:i(n).mul(4).equal(),maxHeight:i(n).mul(4).equal()}),[`${t}-svg${t}-svg-circle`]:{borderRadius:"50%"}}),[`${t}${t}-circle`]:{borderRadius:"50%"}}},$m=(e,t,n)=>{const{skeletonButtonCls:r}=e;return{[`${n}${r}-circle`]:{width:t,minWidth:t,borderRadius:"50%"},[`${n}${r}-round`]:{borderRadius:t}}},Im=(e,t)=>Object.assign({width:t(e).mul(2).equal(),minWidth:t(e).mul(2).equal()},Wv(e)),OH=e=>{const{borderRadiusSM:t,skeletonButtonCls:n,controlHeight:r,controlHeightLG:o,controlHeightSM:i,gradientFromColor:a,calc:s}=e;return 
Object.assign(Object.assign(Object.assign(Object.assign(Object.assign({[n]:Object.assign({display:"inline-block",verticalAlign:"top",background:a,borderRadius:t,width:s(r).mul(2).equal(),minWidth:s(r).mul(2).equal()},Im(r,s))},$m(e,r,n)),{[`${n}-lg`]:Object.assign({},Im(o,s))}),$m(e,o,`${n}-lg`)),{[`${n}-sm`]:Object.assign({},Im(i,s))}),$m(e,i,`${n}-sm`))},$H=e=>{const{componentCls:t,skeletonAvatarCls:n,skeletonTitleCls:r,skeletonParagraphCls:o,skeletonButtonCls:i,skeletonInputCls:a,skeletonImageCls:s,controlHeight:c,controlHeightLG:u,controlHeightSM:p,gradientFromColor:v,padding:h,marginSM:m,borderRadius:b,titleHeight:y,blockRadius:w,paragraphLiHeight:C,controlHeightXS:S,paragraphMarginTop:E}=e;return{[t]:{display:"table",width:"100%",[`${t}-header`]:{display:"table-cell",paddingInlineEnd:h,verticalAlign:"top",[n]:Object.assign({display:"inline-block",verticalAlign:"top",background:v},Ml(c)),[`${n}-circle`]:{borderRadius:"50%"},[`${n}-lg`]:Object.assign({},Ml(u)),[`${n}-sm`]:Object.assign({},Ml(p))},[`${t}-content`]:{display:"table-cell",width:"100%",verticalAlign:"top",[r]:{width:"100%",height:y,background:v,borderRadius:w,[`+ ${o}`]:{marginBlockStart:p}},[o]:{padding:0,"> li":{width:"100%",height:C,listStyle:"none",background:v,borderRadius:w,"+ li":{marginBlockStart:S}}},[`${o}> li:last-child:not(:first-child):not(:nth-child(2))`]:{width:"61%"}},[`&-round ${t}-content`]:{[`${r}, ${o} > li`]:{borderRadius:b}}},[`${t}-with-avatar ${t}-content`]:{[r]:{marginBlockStart:m,[`+ ${o}`]:{marginBlockStart:E}}},[`${t}${t}-element`]:Object.assign(Object.assign(Object.assign(Object.assign({display:"inline-block",width:"auto"},OH(e)),CH(e)),EH(e)),kH(e)),[`${t}${t}-block`]:{width:"100%",[i]:{width:"100%"},[a]:{width:"100%"}},[`${t}${t}-active`]:{[` - ${r}, - ${o} > li, - ${n}, - ${i}, - ${a}, - ${s} - 
`]:Object.assign({},SH(e))}}},IH=e=>{const{colorFillContent:t,colorFill:n}=e,r=t,o=n;return{color:r,colorGradientEnd:o,gradientFromColor:r,gradientToColor:o,titleHeight:e.controlHeight/2,blockRadius:e.borderRadiusSM,paragraphMarginTop:e.marginLG+e.marginXXS,paragraphLiHeight:e.controlHeight/2}},uc=In("Skeleton",e=>{const{componentCls:t,calc:n}=e,r=vn(e,{skeletonAvatarCls:`${t}-avatar`,skeletonTitleCls:`${t}-title`,skeletonParagraphCls:`${t}-paragraph`,skeletonButtonCls:`${t}-button`,skeletonInputCls:`${t}-input`,skeletonImageCls:`${t}-image`,imageSizeBase:n(e.controlHeight).mul(1.5).equal(),borderRadius:100,skeletonLoadingBackground:`linear-gradient(90deg, ${e.gradientFromColor} 25%, ${e.gradientToColor} 37%, ${e.gradientFromColor} 63%)`,skeletonLoadingMotionDuration:"1.4s"});return[$H(r)]},IH,{deprecatedTokens:[["color","gradientFromColor"],["colorGradientEnd","gradientToColor"]]}),TH=e=>{const{prefixCls:t,className:n,rootClassName:r,active:o,shape:i="circle",size:a="default"}=e,{getPrefixCls:s}=d.useContext(ht),c=s("skeleton",t),[u,p,v]=uc(c),h=Ln(e,["prefixCls","className"]),m=ie(c,`${c}-element`,{[`${c}-active`]:o},n,r,p,v);return u(d.createElement("div",{className:m},d.createElement(Vv,Object.assign({prefixCls:`${c}-avatar`,shape:i,size:a},h))))},PH=e=>{const{prefixCls:t,className:n,rootClassName:r,active:o,block:i=!1,size:a="default"}=e,{getPrefixCls:s}=d.useContext(ht),c=s("skeleton",t),[u,p,v]=uc(c),h=Ln(e,["prefixCls"]),m=ie(c,`${c}-element`,{[`${c}-active`]:o,[`${c}-block`]:i},n,r,p,v);return u(d.createElement("div",{className:m},d.createElement(Vv,Object.assign({prefixCls:`${c}-button`,size:a},h))))},MH="M365.714286 329.142857q0 45.714286-32.036571 77.677714t-77.677714 32.036571-77.677714-32.036571-32.036571-77.677714 32.036571-77.677714 77.677714-32.036571 77.677714 32.036571 32.036571 77.677714zM950.857143 548.571429l0 256-804.571429 0 0-109.714286 182.857143-182.857143 91.428571 91.428571 292.571429-292.571429zM1005.714286 146.285714l-914.285714 
0q-7.460571 0-12.873143 5.412571t-5.412571 12.873143l0 694.857143q0 7.460571 5.412571 12.873143t12.873143 5.412571l914.285714 0q7.460571 0 12.873143-5.412571t5.412571-12.873143l0-694.857143q0-7.460571-5.412571-12.873143t-12.873143-5.412571zM1097.142857 164.571429l0 694.857143q0 37.741714-26.843429 64.585143t-64.585143 26.843429l-914.285714 0q-37.741714 0-64.585143-26.843429t-26.843429-64.585143l0-694.857143q0-37.741714 26.843429-64.585143t64.585143-26.843429l914.285714 0q37.741714 0 64.585143 26.843429t26.843429 64.585143z",NH=e=>{const{prefixCls:t,className:n,rootClassName:r,style:o,active:i}=e,{getPrefixCls:a}=d.useContext(ht),s=a("skeleton",t),[c,u,p]=uc(s),v=ie(s,`${s}-element`,{[`${s}-active`]:i},n,r,u,p);return c(d.createElement("div",{className:v},d.createElement("div",{className:ie(`${s}-image`,n),style:o},d.createElement("svg",{viewBox:"0 0 1098 1024",xmlns:"http://www.w3.org/2000/svg",className:`${s}-image-svg`},d.createElement("title",null,"Image placeholder"),d.createElement("path",{d:MH,className:`${s}-image-path`})))))},RH=e=>{const{prefixCls:t,className:n,rootClassName:r,active:o,block:i,size:a="default"}=e,{getPrefixCls:s}=d.useContext(ht),c=s("skeleton",t),[u,p,v]=uc(c),h=Ln(e,["prefixCls"]),m=ie(c,`${c}-element`,{[`${c}-active`]:o,[`${c}-block`]:i},n,r,p,v);return u(d.createElement("div",{className:m},d.createElement(Vv,Object.assign({prefixCls:`${c}-input`,size:a},h))))},DH=e=>{const{prefixCls:t,className:n,rootClassName:r,style:o,active:i,children:a}=e,{getPrefixCls:s}=d.useContext(ht),c=s("skeleton",t),[u,p,v]=uc(c),h=ie(c,`${c}-element`,{[`${c}-active`]:i},p,n,r,v);return u(d.createElement("div",{className:h},d.createElement("div",{className:ie(`${c}-image`,n),style:o},a)))},jH=(e,t)=>{const{width:n,rows:r=2}=t;if(Array.isArray(n))return n[e];if(r-1===e)return n},LH=e=>{const{prefixCls:t,className:n,style:r,rows:o}=e,i=Se(new Array(o)).map((a,s)=>d.createElement("li",{key:s,style:{width:jH(s,e)}}));return 
d.createElement("ul",{className:ie(t,n),style:r},i)},BH=e=>{let{prefixCls:t,className:n,width:r,style:o}=e;return d.createElement("h3",{className:ie(t,n),style:Object.assign({width:r},o)})};function Tm(e){return e&&typeof e=="object"?e:{}}function AH(e,t){return e&&!t?{size:"large",shape:"square"}:{size:"large",shape:"circle"}}function zH(e,t){return!e&&t?{width:"38%"}:e&&t?{width:"50%"}:{}}function HH(e,t){const n={};return(!e||!t)&&(n.width="61%"),!e&&t?n.rows=3:n.rows=2,n}const dc=e=>{const{prefixCls:t,loading:n,className:r,rootClassName:o,style:i,children:a,avatar:s=!1,title:c=!0,paragraph:u=!0,active:p,round:v}=e,{getPrefixCls:h,direction:m,skeleton:b}=d.useContext(ht),y=h("skeleton",t),[w,C,S]=uc(y);if(n||!("loading"in e)){const E=!!s,k=!!c,O=!!u;let $;if(E){const P=Object.assign(Object.assign({prefixCls:`${y}-avatar`},AH(k,O)),Tm(s));$=d.createElement("div",{className:`${y}-header`},d.createElement(Vv,Object.assign({},P)))}let T;if(k||O){let P;if(k){const A=Object.assign(Object.assign({prefixCls:`${y}-title`},zH(E,O)),Tm(c));P=d.createElement(BH,Object.assign({},A))}let R;if(O){const A=Object.assign(Object.assign({prefixCls:`${y}-paragraph`},HH(E,k)),Tm(u));R=d.createElement(LH,Object.assign({},A))}T=d.createElement("div",{className:`${y}-content`},P,R)}const M=ie(y,{[`${y}-with-avatar`]:E,[`${y}-active`]:p,[`${y}-rtl`]:m==="rtl",[`${y}-round`]:v},b==null?void 0:b.className,r,o,C,S);return w(d.createElement("div",{className:M,style:Object.assign(Object.assign({},b==null?void 0:b.style),i)},$,T))}return a??null};dc.Button=PH;dc.Avatar=TH;dc.Input=RH;dc.Image=NH;dc.Node=DH;function BE(){}const FH=d.createContext({add:BE,remove:BE});function _H(e){const t=d.useContext(FH),n=d.useRef();return gn(o=>{if(o){const i=e?o.querySelector(e):o;t.add(i),n.current=i}else t.remove(n.current)})}const AE=()=>{const{cancelButtonProps:e,cancelTextLocale:t,onCancel:n}=d.useContext(Cd);return 
ue.createElement(jr,Object.assign({onClick:n},e),t)},zE=()=>{const{confirmLoading:e,okButtonProps:t,okType:n,okTextLocale:r,onOk:o}=d.useContext(Cd);return ue.createElement(jr,Object.assign({},ew(n),{loading:e,onClick:o},t),r)};function PT(e,t){return ue.createElement("span",{className:`${e}-close-x`},t||ue.createElement(yd,{className:`${e}-close-icon`}))}const MT=e=>{const{okText:t,okType:n="primary",cancelText:r,confirmLoading:o,onOk:i,onCancel:a,okButtonProps:s,cancelButtonProps:c,footer:u}=e,[p]=bi("Modal",mI()),v=t||(p==null?void 0:p.okText),h=r||(p==null?void 0:p.cancelText),m={confirmLoading:o,okButtonProps:s,cancelButtonProps:c,okTextLocale:v,cancelTextLocale:h,okType:n,onOk:i,onCancel:a},b=ue.useMemo(()=>m,Se(Object.values(m)));let y;return typeof u=="function"||typeof u>"u"?(y=ue.createElement(ue.Fragment,null,ue.createElement(AE,null),ue.createElement(zE,null)),typeof u=="function"&&(y=u(y,{OkBtn:zE,CancelBtn:AE})),y=ue.createElement(uT,{value:b},y)):y=u,ue.createElement(Gy,{disabled:!1},y)};function HE(e){return{position:e,inset:0}}const VH=e=>{const{componentCls:t,antCls:n}=e;return[{[`${t}-root`]:{[`${t}${n}-zoom-enter, ${t}${n}-zoom-appear`]:{transform:"none",opacity:0,animationDuration:e.motionDurationSlow,userSelect:"none"},[`${t}${n}-zoom-leave ${t}-content`]:{pointerEvents:"none"},[`${t}-mask`]:Object.assign(Object.assign({},HE("fixed")),{zIndex:e.zIndexPopupBase,height:"100%",backgroundColor:e.colorBgMask,pointerEvents:"none",[`${t}-hidden`]:{display:"none"}}),[`${t}-wrap`]:Object.assign(Object.assign({},HE("fixed")),{zIndex:e.zIndexPopupBase,overflow:"auto",outline:0,WebkitOverflowScrolling:"touch"})}},{[`${t}-root`]:aT(e)}]},WH=e=>{const{componentCls:t}=e;return[{[`${t}-root`]:{[`${t}-wrap-rtl`]:{direction:"rtl"},[`${t}-centered`]:{textAlign:"center","&::before":{display:"inline-block",width:0,height:"100%",verticalAlign:"middle",content:'""'},[t]:{top:0,display:"inline-block",paddingBottom:0,textAlign:"start",verticalAlign:"middle"}},[`@media 
(max-width: ${e.screenSMMax}px)`]:{[t]:{maxWidth:"calc(100vw - 16px)",margin:`${de(e.marginXS)} auto`},[`${t}-centered`]:{[t]:{flex:1}}}}},{[t]:Object.assign(Object.assign({},jn(e)),{pointerEvents:"none",position:"relative",top:100,width:"auto",maxWidth:`calc(100vw - ${de(e.calc(e.margin).mul(2).equal())})`,margin:"0 auto",paddingBottom:e.paddingLG,[`${t}-title`]:{margin:0,color:e.titleColor,fontWeight:e.fontWeightStrong,fontSize:e.titleFontSize,lineHeight:e.titleLineHeight,wordWrap:"break-word"},[`${t}-content`]:{position:"relative",backgroundColor:e.contentBg,backgroundClip:"padding-box",border:0,borderRadius:e.borderRadiusLG,boxShadow:e.boxShadow,pointerEvents:"auto",padding:e.contentPadding},[`${t}-close`]:Object.assign({position:"absolute",top:e.calc(e.modalHeaderHeight).sub(e.modalCloseBtnSize).div(2).equal(),insetInlineEnd:e.calc(e.modalHeaderHeight).sub(e.modalCloseBtnSize).div(2).equal(),zIndex:e.calc(e.zIndexPopupBase).add(10).equal(),padding:0,color:e.modalCloseIconColor,fontWeight:e.fontWeightStrong,lineHeight:1,textDecoration:"none",background:"transparent",borderRadius:e.borderRadiusSM,width:e.modalCloseBtnSize,height:e.modalCloseBtnSize,border:0,outline:0,cursor:"pointer",transition:`color ${e.motionDurationMid}, background-color ${e.motionDurationMid}`,"&-x":{display:"flex",fontSize:e.fontSizeLG,fontStyle:"normal",lineHeight:de(e.modalCloseBtnSize),justifyContent:"center",textTransform:"none",textRendering:"auto"},"&:disabled":{pointerEvents:"none"},"&:hover":{color:e.modalCloseIconHoverColor,backgroundColor:e.colorBgTextHover,textDecoration:"none"},"&:active":{backgroundColor:e.colorBgTextActive}},Xl(e)),[`${t}-header`]:{color:e.colorText,background:e.headerBg,borderRadius:`${de(e.borderRadiusLG)} ${de(e.borderRadiusLG)} 0 
0`,marginBottom:e.headerMarginBottom,padding:e.headerPadding,borderBottom:e.headerBorderBottom},[`${t}-body`]:{fontSize:e.fontSize,lineHeight:e.lineHeight,wordWrap:"break-word",padding:e.bodyPadding,[`${t}-body-skeleton`]:{width:"100%",height:"100%",display:"flex",justifyContent:"center",alignItems:"center",margin:`${de(e.margin)} auto`}},[`${t}-footer`]:{textAlign:"end",background:e.footerBg,marginTop:e.footerMarginTop,padding:e.footerPadding,borderTop:e.footerBorderTop,borderRadius:e.footerBorderRadius,[`> ${e.antCls}-btn + ${e.antCls}-btn`]:{marginInlineStart:e.marginXS}},[`${t}-open`]:{overflow:"hidden"}})},{[`${t}-pure-panel`]:{top:"auto",padding:0,display:"flex",flexDirection:"column",[`${t}-content, - ${t}-body, - ${t}-confirm-body-wrapper`]:{display:"flex",flexDirection:"column",flex:"auto"},[`${t}-confirm-body`]:{marginBottom:"auto"}}}]},UH=e=>{const{componentCls:t}=e;return{[`${t}-root`]:{[`${t}-wrap-rtl`]:{direction:"rtl",[`${t}-confirm-body`]:{direction:"rtl"}}}}},NT=e=>{const t=e.padding,n=e.fontSizeHeading5,r=e.lineHeightHeading5;return vn(e,{modalHeaderHeight:e.calc(e.calc(r).mul(n).equal()).add(e.calc(t).mul(2).equal()).equal(),modalFooterBorderColorSplit:e.colorSplit,modalFooterBorderStyle:e.lineType,modalFooterBorderWidth:e.lineWidth,modalCloseIconColor:e.colorIcon,modalCloseIconHoverColor:e.colorIconHover,modalCloseBtnSize:e.controlHeight,modalConfirmIconSize:e.fontHeight,modalTitleHeight:e.calc(e.titleFontSize).mul(e.titleLineHeight).equal()})},RT=e=>({footerBg:"transparent",headerBg:e.colorBgElevated,titleLineHeight:e.lineHeightHeading5,titleFontSize:e.fontSizeHeading5,contentBg:e.colorBgElevated,titleColor:e.colorTextHeading,contentPadding:e.wireframe?0:`${de(e.paddingMD)} ${de(e.paddingContentHorizontalLG)}`,headerPadding:e.wireframe?`${de(e.padding)} ${de(e.paddingLG)}`:0,headerBorderBottom:e.wireframe?`${de(e.lineWidth)} ${e.lineType} 
${e.colorSplit}`:"none",headerMarginBottom:e.wireframe?0:e.marginXS,bodyPadding:e.wireframe?e.paddingLG:0,footerPadding:e.wireframe?`${de(e.paddingXS)} ${de(e.padding)}`:0,footerBorderTop:e.wireframe?`${de(e.lineWidth)} ${e.lineType} ${e.colorSplit}`:"none",footerBorderRadius:e.wireframe?`0 0 ${de(e.borderRadiusLG)} ${de(e.borderRadiusLG)}`:0,footerMarginTop:e.wireframe?0:e.marginSM,confirmBodyPadding:e.wireframe?`${de(e.padding*2)} ${de(e.padding*2)} ${de(e.paddingLG)}`:0,confirmIconMarginInlineEnd:e.wireframe?e.margin:e.marginSM,confirmBtnsMarginTop:e.wireframe?e.marginLG:e.marginSM}),DT=In("Modal",e=>{const t=NT(e);return[WH(t),UH(t),VH(t),Sd(t,"zoom")]},RT,{unitless:{titleLineHeight:!0}});var KH=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{Cb={x:e.pageX,y:e.pageY},setTimeout(()=>{Cb=null},100)};wH()&&document.documentElement.addEventListener("click",qH,!0);const jT=e=>{var t;const{getPopupContainer:n,getPrefixCls:r,direction:o,modal:i}=d.useContext(ht),a=G=>{const{onCancel:q}=e;q==null||q(G)},s=G=>{const{onOk:q}=e;q==null||q(G)},{prefixCls:c,className:u,rootClassName:p,open:v,wrapClassName:h,centered:m,getContainer:b,focusTriggerAfterClose:y=!0,style:w,visible:C,width:S=520,footer:E,classNames:k,styles:O,children:$,loading:T}=e,M=KH(e,["prefixCls","className","rootClassName","open","wrapClassName","centered","getContainer","focusTriggerAfterClose","style","visible","width","footer","classNames","styles","children","loading"]),P=r("modal",c),R=r(),A=br(P),[V,z,B]=DT(P,A),_=ie(h,{[`${P}-centered`]:!!m,[`${P}-wrap-rtl`]:o==="rtl"}),H=E!==null&&!T?d.createElement(MT,Object.assign({},e,{onOk:s,onCancel:a})):null,[j,L,F]=IT(Zp(e),Zp(i),{closable:!0,closeIcon:d.createElement(yd,{className:`${P}-close-icon`}),closeIconRender:G=>PT(P,G)}),U=_H(`.${P}-content`),[D,W]=sc("Modal",M.zIndex);return 
V(d.createElement(nd,{form:!0,space:!0},d.createElement(Rv.Provider,{value:W},d.createElement(mT,Object.assign({width:S},M,{zIndex:D,getContainer:b===void 0?n:b,prefixCls:P,rootClassName:ie(z,p,B,A),footer:H,visible:v??C,mousePosition:(t=M.mousePosition)!==null&&t!==void 0?t:Cb,onClose:a,closable:j&&{disabled:F,closeIcon:L},closeIcon:L,focusTriggerAfterClose:y,transitionName:ra(R,"zoom",e.transitionName),maskTransitionName:ra(R,"fade",e.maskTransitionName),className:ie(z,u,i==null?void 0:i.className),style:Object.assign(Object.assign({},i==null?void 0:i.style),w),classNames:Object.assign(Object.assign(Object.assign({},i==null?void 0:i.classNames),k),{wrapper:ie(_,k==null?void 0:k.wrapper)}),styles:Object.assign(Object.assign({},i==null?void 0:i.styles),O),panelRef:U}),T?d.createElement(dc,{active:!0,title:!1,paragraph:{rows:4},className:`${P}-body-skeleton`}):$))))},XH=e=>{const{componentCls:t,titleFontSize:n,titleLineHeight:r,modalConfirmIconSize:o,fontSize:i,lineHeight:a,modalTitleHeight:s,fontHeight:c,confirmBodyPadding:u}=e,p=`${t}-confirm`;return{[p]:{"&-rtl":{direction:"rtl"},[`${e.antCls}-modal-header`]:{display:"none"},[`${p}-body-wrapper`]:Object.assign({},Ps()),[`&${t} ${t}-body`]:{padding:u},[`${p}-body`]:{display:"flex",flexWrap:"nowrap",alignItems:"start",[`> ${e.iconCls}`]:{flex:"none",fontSize:o,marginInlineEnd:e.confirmIconMarginInlineEnd,marginTop:e.calc(e.calc(c).sub(o).equal()).div(2).equal()},[`&-has-title > ${e.iconCls}`]:{marginTop:e.calc(e.calc(s).sub(o).equal()).div(2).equal()}},[`${p}-paragraph`]:{display:"flex",flexDirection:"column",flex:"auto",rowGap:e.marginXS},[`${e.iconCls} + ${p}-paragraph`]:{maxWidth:`calc(100% - ${de(e.calc(e.modalConfirmIconSize).add(e.marginSM).equal())})`},[`${p}-title`]:{color:e.colorTextHeading,fontWeight:e.fontWeightStrong,fontSize:n,lineHeight:r},[`${p}-content`]:{color:e.colorText,fontSize:i,lineHeight:a},[`${p}-btns`]:{textAlign:"end",marginTop:e.confirmBtnsMarginTop,[`${e.antCls}-btn + 
${e.antCls}-btn`]:{marginBottom:0,marginInlineStart:e.marginXS}}},[`${p}-error ${p}-body > ${e.iconCls}`]:{color:e.colorError},[`${p}-warning ${p}-body > ${e.iconCls}, - ${p}-confirm ${p}-body > ${e.iconCls}`]:{color:e.colorWarning},[`${p}-info ${p}-body > ${e.iconCls}`]:{color:e.colorInfo},[`${p}-success ${p}-body > ${e.iconCls}`]:{color:e.colorSuccess}}},GH=ic(["Modal","confirm"],e=>{const t=NT(e);return[XH(t)]},RT,{order:-1e3});var YH=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);oS,Se(Object.values(S))),k=d.createElement(d.Fragment,null,d.createElement(vE,null),d.createElement(hE,null)),O=e.title!==void 0&&e.title!==null,$=`${i}-body`;return d.createElement("div",{className:`${i}-body-wrapper`},d.createElement("div",{className:ie($,{[`${$}-has-title`]:O})},v,d.createElement("div",{className:`${i}-paragraph`},O&&d.createElement("span",{className:`${i}-title`},e.title),d.createElement("div",{className:`${i}-content`},e.content))),c===void 0||typeof c=="function"?d.createElement(uT,{value:E},d.createElement("div",{className:`${i}-btns`},typeof c=="function"?c(k,{OkBtn:hE,CancelBtn:vE}):k)):c,d.createElement(GH,{prefixCls:t}))}const QH=e=>{const{close:t,zIndex:n,afterClose:r,open:o,keyboard:i,centered:a,getContainer:s,maskStyle:c,direction:u,prefixCls:p,wrapClassName:v,rootPrefixCls:h,bodyStyle:m,closable:b=!1,closeIcon:y,modalRender:w,focusTriggerAfterClose:C,onConfirm:S,styles:E}=e,k=`${p}-confirm`,O=e.width||416,$=e.style||{},T=e.mask===void 0?!0:e.mask,M=e.maskClosable===void 0?!1:e.maskClosable,P=ie(k,`${k}-${e.type}`,{[`${k}-rtl`]:u==="rtl"},e.className),[,R]=Ir(),A=d.useMemo(()=>n!==void 0?n:R.zIndexPopupBase+k5,[n,R]);return 
d.createElement(jT,{prefixCls:p,className:P,wrapClassName:ie({[`${k}-centered`]:!!e.centered},v),onCancel:()=>{t==null||t({triggerCancel:!0}),S==null||S(!1)},open:o,title:"",footer:null,transitionName:ra(h||"","zoom",e.transitionName),maskTransitionName:ra(h||"","fade",e.maskTransitionName),mask:T,maskClosable:M,style:$,styles:Object.assign({body:m,mask:c},E),width:O,zIndex:A,afterClose:r,keyboard:i,centered:a,getContainer:s,closable:b,closeIcon:y,modalRender:w,focusTriggerAfterClose:C},d.createElement(LT,Object.assign({},e,{confirmPrefixCls:k})))},BT=e=>{const{rootPrefixCls:t,iconPrefixCls:n,direction:r,theme:o}=e;return d.createElement(la,{prefixCls:t,iconPrefixCls:n,direction:r,theme:o},d.createElement(QH,Object.assign({},e)))},ws=[];let AT="";function zT(){return AT}const ZH=e=>{var t,n;const{prefixCls:r,getContainer:o,direction:i}=e,a=mI(),s=d.useContext(ht),c=zT()||s.getPrefixCls(),u=r||`${c}-modal`;let p=o;return p===!1&&(p=void 0),ue.createElement(BT,Object.assign({},e,{rootPrefixCls:c,prefixCls:u,iconPrefixCls:s.iconPrefixCls,theme:s.theme,direction:i??s.direction,locale:(n=(t=s.locale)===null||t===void 0?void 0:t.Modal)!==null&&n!==void 0?n:a,getContainer:p}))};function kd(e){const t=_A(),n=document.createDocumentFragment();let r=Object.assign(Object.assign({},e),{close:s,open:!0}),o;function i(){for(var u,p=arguments.length,v=new Array(p),h=0;hy==null?void 0:y.triggerCancel)){var b;(u=e.onCancel)===null||u===void 0||(b=u).call.apply(b,[e,()=>{}].concat(Se(v.slice(1))))}for(let y=0;y{const p=t.getPrefixCls(void 0,zT()),v=t.getIconPrefixCls(),h=t.getTheme(),m=ue.createElement(ZH,Object.assign({},u));eT(ue.createElement(la,{prefixCls:p,iconPrefixCls:v,theme:h},t.holderRender?t.holderRender(m):m),n)})}function s(){for(var u=arguments.length,p=new Array(u),v=0;v{typeof e.afterClose=="function"&&e.afterClose(),i.apply(this,p)}}),r.visible&&delete r.visible,a(r)}function c(u){typeof u=="function"?r=u(r):r=Object.assign(Object.assign({},r),u),a(r)}return 
a(r),ws.push(s),{destroy:s,update:c}}function HT(e){return Object.assign(Object.assign({},e),{type:"warning"})}function FT(e){return Object.assign(Object.assign({},e),{type:"info"})}function _T(e){return Object.assign(Object.assign({},e),{type:"success"})}function VT(e){return Object.assign(Object.assign({},e),{type:"error"})}function WT(e){return Object.assign(Object.assign({},e),{type:"confirm"})}function JH(e){let{rootPrefixCls:t}=e;AT=t}var eF=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{var n,{afterClose:r,config:o}=e,i=eF(e,["afterClose","config"]);const[a,s]=d.useState(!0),[c,u]=d.useState(o),{direction:p,getPrefixCls:v}=d.useContext(ht),h=v("modal"),m=v(),b=()=>{var S;r(),(S=c.afterClose)===null||S===void 0||S.call(c)},y=function(){var S;s(!1);for(var E=arguments.length,k=new Array(E),O=0;OM==null?void 0:M.triggerCancel)){var T;(S=c.onCancel)===null||S===void 0||(T=S).call.apply(T,[c,()=>{}].concat(Se(k.slice(1))))}};d.useImperativeHandle(t,()=>({destroy:y,update:S=>{u(E=>Object.assign(Object.assign({},E),S))}}));const w=(n=c.okCancel)!==null&&n!==void 0?n:c.type==="confirm",[C]=bi("Modal",hi.Modal);return d.createElement(BT,Object.assign({prefixCls:h,rootPrefixCls:m},c,{close:y,open:a,afterClose:b,okText:c.okText||(w?C==null?void 0:C.okText:C==null?void 0:C.justOkText),direction:c.direction||p,cancelText:c.cancelText||(C==null?void 0:C.cancelText)},i))},nF=d.forwardRef(tF);let FE=0;const rF=d.memo(d.forwardRef((e,t)=>{const[n,r]=I5();return d.useImperativeHandle(t,()=>({patchElement:r}),[]),d.createElement(d.Fragment,null,n)}));function oF(){const e=d.useRef(null),[t,n]=d.useState([]);d.useEffect(()=>{t.length&&(Se(t).forEach(a=>{a()}),n([]))},[t]);const r=d.useCallback(i=>function(s){var c;FE+=1;const u=d.createRef();let p;const v=new Promise(w=>{p=w});let h=!1,m;const 
b=d.createElement(nF,{key:`modal-${FE}`,config:i(s),ref:u,afterClose:()=>{m==null||m()},isSilent:()=>h,onConfirm:w=>{p(w)}});return m=(c=e.current)===null||c===void 0?void 0:c.patchElement(b),m&&ws.push(m),{destroy:()=>{function w(){var C;(C=u.current)===null||C===void 0||C.destroy()}u.current?w():n(C=>[].concat(Se(C),[w]))},update:w=>{function C(){var S;(S=u.current)===null||S===void 0||S.update(w)}u.current?C():n(S=>[].concat(Se(S),[C]))},then:w=>(h=!0,v.then(w))}},[]);return[d.useMemo(()=>({info:r(FT),success:r(_T),error:r(VT),warning:r(HT),confirm:r(WT)}),[]),d.createElement(rF,{key:"modal-holder",ref:e})]}function UT(e){return t=>d.createElement(la,{theme:{token:{motion:!1,zIndexPopupBase:0}}},d.createElement(e,Object.assign({},t)))}const bw=(e,t,n,r)=>UT(i=>{const{prefixCls:a,style:s}=i,c=d.useRef(null),[u,p]=d.useState(0),[v,h]=d.useState(0),[m,b]=Dn(!1,{value:i.open}),{getPrefixCls:y}=d.useContext(ht),w=y(t||"select",a);d.useEffect(()=>{if(b(!0),typeof ResizeObserver<"u"){const E=new ResizeObserver(O=>{const $=O[0].target;p($.offsetHeight+8),h($.offsetWidth)}),k=setInterval(()=>{var O;const $=n?`.${n(w)}`:`.${w}-dropdown`,T=(O=c.current)===null||O===void 0?void 0:O.querySelector($);T&&(clearInterval(k),E.observe(T))},10);return()=>{clearInterval(k),E.disconnect()}}},[]);let C=Object.assign(Object.assign({},i),{style:Object.assign(Object.assign({},s),{margin:0}),open:m,visible:m,getPopupContainer:()=>c.current});r&&(C=r(C));const S={paddingBottom:u,position:"relative",minWidth:v};return d.createElement("div",{ref:c,style:S},d.createElement(e,Object.assign({},C)))}),KT=function(){if(typeof navigator>"u"||typeof window>"u")return!1;var e=navigator.userAgent||navigator.vendor||window.opera;return/(android|bb\d+|meego).+mobile|avantgo|bada\/|blackberry|blazer|compal|elaine|fennec|hiptop|iemobile|ip(hone|od)|iris|kindle|lge |maemo|midp|mmp|mobile.+firefox|netfront|opera m(ob|in)i|palm( 
os)?|phone|p(ixi|re)\/|plucker|pocket|psp|series(4|6)0|symbian|treo|up\.(browser|link)|vodafone|wap|windows ce|xda|xiino|android|ipad|playbook|silk/i.test(e)||/1207|6310|6590|3gso|4thp|50[1-6]i|770s|802s|a wa|abac|ac(er|oo|s-)|ai(ko|rn)|al(av|ca|co)|amoi|an(ex|ny|yw)|aptu|ar(ch|go)|as(te|us)|attw|au(di|-m|r |s )|avan|be(ck|ll|nq)|bi(lb|rd)|bl(ac|az)|br(e|v)w|bumb|bw-(n|u)|c55\/|capi|ccwa|cdm-|cell|chtm|cldc|cmd-|co(mp|nd)|craw|da(it|ll|ng)|dbte|dc-s|devi|dica|dmob|do(c|p)o|ds(12|-d)|el(49|ai)|em(l2|ul)|er(ic|k0)|esl8|ez([4-7]0|os|wa|ze)|fetc|fly(-|_)|g1 u|g560|gene|gf-5|g-mo|go(\.w|od)|gr(ad|un)|haie|hcit|hd-(m|p|t)|hei-|hi(pt|ta)|hp( i|ip)|hs-c|ht(c(-| |_|a|g|p|s|t)|tp)|hu(aw|tc)|i-(20|go|ma)|i230|iac( |-|\/)|ibro|idea|ig01|ikom|im1k|inno|ipaq|iris|ja(t|v)a|jbro|jemu|jigs|kddi|keji|kgt( |\/)|klon|kpt |kwc-|kyo(c|k)|le(no|xi)|lg( g|\/(k|l|u)|50|54|-[a-w])|libw|lynx|m1-w|m3ga|m50\/|ma(te|ui|xo)|mc(01|21|ca)|m-cr|me(rc|ri)|mi(o8|oa|ts)|mmef|mo(01|02|bi|de|do|t(-| |o|v)|zz)|mt(50|p1|v )|mwbp|mywa|n10[0-2]|n20[2-3]|n30(0|2)|n50(0|2|5)|n7(0(0|1)|10)|ne((c|m)-|on|tf|wf|wg|wt)|nok(6|i)|nzph|o2im|op(ti|wv)|oran|owg1|p800|pan(a|d|t)|pdxg|pg(13|-([1-8]|c))|phil|pire|pl(ay|uc)|pn-2|po(ck|rt|se)|prox|psio|pt-g|qa-a|qc(07|12|21|32|60|-[2-7]|i-)|qtek|r380|r600|raks|rim9|ro(ve|zo)|s55\/|sa(ge|ma|mm|ms|ny|va)|sc(01|h-|oo|p-)|sdk\/|se(c(-|0|1)|47|mc|nd|ri)|sgh-|shar|sie(-|m)|sk-0|sl(45|id)|sm(al|ar|b3|it|t5)|so(ft|ny)|sp(01|h-|v-|v )|sy(01|mb)|t2(18|50)|t6(00|10|18)|ta(gt|lk)|tcl-|tdg-|tel(i|m)|tim-|t-mo|to(pl|sh)|ts(70|m-|m3|m5)|tx-9|up(\.b|g1|si)|utst|v400|v750|veri|vi(rg|te)|vk(40|5[0-3]|-v)|vm40|voda|vulc|vx(52|53|60|61|70|80|81|83|85|98)|w3c(-| )|webc|whit|wi(g |nc|nw)|wmlb|wonu|x700|yas-|your|zeto|zte-/i.test(e==null?void 0:e.substr(0,4))};var Uv=function(t){var n=t.className,r=t.customizeIcon,o=t.customizeIconProps,i=t.children,a=t.onMouseDown,s=t.onClick,c=typeof r=="function"?r(o):r;return 
d.createElement("span",{className:n,onMouseDown:function(p){p.preventDefault(),a==null||a(p)},style:{userSelect:"none",WebkitUserSelect:"none"},unselectable:"on",onClick:s,"aria-hidden":!0},c!==void 0?c:d.createElement("span",{className:ie(n.split(/\s+/).map(function(u){return"".concat(u,"-icon")}))},i))},iF=function(t,n,r,o,i){var a=arguments.length>5&&arguments[5]!==void 0?arguments[5]:!1,s=arguments.length>6?arguments[6]:void 0,c=arguments.length>7?arguments[7]:void 0,u=ue.useMemo(function(){if(st(o)==="object")return o.clearIcon;if(i)return i},[o,i]),p=ue.useMemo(function(){return!!(!a&&o&&(r.length||s)&&!(c==="combobox"&&s===""))},[o,a,r.length,s,c]);return{allowClear:p,clearIcon:ue.createElement(Uv,{className:"".concat(t,"-clear"),onMouseDown:n,customizeIcon:u},"×")}},qT=d.createContext(null);function XT(){return d.useContext(qT)}function aF(){var e=arguments.length>0&&arguments[0]!==void 0?arguments[0]:10,t=d.useState(!1),n=ve(t,2),r=n[0],o=n[1],i=d.useRef(null),a=function(){window.clearTimeout(i.current)};d.useEffect(function(){return a},[]);var s=function(u,p){a(),i.current=window.setTimeout(function(){o(u),p&&p()},e)};return[r,s,a]}function GT(){var e=arguments.length>0&&arguments[0]!==void 0?arguments[0]:250,t=d.useRef(null),n=d.useRef(null);d.useEffect(function(){return function(){window.clearTimeout(n.current)}},[]);function r(o){(o||t.current===null)&&(t.current=o),window.clearTimeout(n.current),n.current=window.setTimeout(function(){t.current=null},e)}return[function(){return t.current},r]}function sF(e,t,n,r){var o=d.useRef(null);o.current={open:t,triggerOpen:n,customizedTrigger:r},d.useEffect(function(){function i(a){var s;if(!((s=o.current)!==null&&s!==void 0&&s.customizedTrigger)){var c=a.target;c.shadowRoot&&a.composed&&(c=a.composedPath()[0]||c),o.current.open&&e().filter(function(u){return u}).every(function(u){return!u.contains(c)&&u!==c})&&o.current.triggerOpen(!1)}}return window.addEventListener("mousedown",i),function(){return 
window.removeEventListener("mousedown",i)}},[])}function lF(e){return![De.ESC,De.SHIFT,De.BACKSPACE,De.TAB,De.WIN_KEY,De.ALT,De.META,De.WIN_KEY_RIGHT,De.CTRL,De.SEMICOLON,De.EQUALS,De.CAPS_LOCK,De.CONTEXT_MENU,De.F1,De.F2,De.F3,De.F4,De.F5,De.F6,De.F7,De.F8,De.F9,De.F10,De.F11,De.F12].includes(e)}var cF=["prefixCls","invalidate","item","renderItem","responsive","responsiveDisabled","registerSize","itemKey","className","style","children","display","order","component"],dl=void 0;function uF(e,t){var n=e.prefixCls,r=e.invalidate,o=e.item,i=e.renderItem,a=e.responsive,s=e.responsiveDisabled,c=e.registerSize,u=e.itemKey,p=e.className,v=e.style,h=e.children,m=e.display,b=e.order,y=e.component,w=y===void 0?"div":y,C=Mt(e,cF),S=a&&!m;function E(M){c(u,M)}d.useEffect(function(){return function(){E(null)}},[]);var k=i&&o!==dl?i(o):h,O;r||(O={opacity:S?0:1,height:S?0:dl,overflowY:S?"hidden":dl,order:a?b:dl,pointerEvents:S?"none":dl,position:S?"absolute":dl});var $={};S&&($["aria-hidden"]=!0);var T=d.createElement(w,$e({className:ie(!r&&n,p),style:Z(Z({},O),v)},$,C,{ref:t}),k);return a&&(T=d.createElement(qo,{onResize:function(P){var R=P.offsetWidth;E(R)},disabled:s},T)),T}var Eu=d.forwardRef(uF);Eu.displayName="Item";function dF(e){if(typeof MessageChannel>"u")bn(e);else{var t=new MessageChannel;t.port1.onmessage=function(){return e()},t.port2.postMessage(void 0)}}function fF(){var e=d.useRef(null),t=function(r){e.current||(e.current=[],dF(function(){pi.unstable_batchedUpdates(function(){e.current.forEach(function(o){o()}),e.current=null})})),e.current.push(r)};return t}function Jc(e,t){var n=d.useState(t),r=ve(n,2),o=r[0],i=r[1],a=gn(function(s){e(function(){i(s)})});return[o,a]}var Jp=ue.createContext(null),pF=["component"],vF=["className"],hF=["className"],gF=function(t,n){var r=d.useContext(Jp);if(!r){var o=t.component,i=o===void 0?"div":o,a=Mt(t,pF);return d.createElement(i,$e({},a,{ref:n}))}var s=r.className,c=Mt(r,vF),u=t.className,p=Mt(t,hF);return 
d.createElement(Jp.Provider,{value:null},d.createElement(Eu,$e({ref:n,className:ie(s,u)},c,p)))},YT=d.forwardRef(gF);YT.displayName="RawItem";var mF=["prefixCls","data","renderItem","renderRawItem","itemKey","itemWidth","ssr","style","className","maxCount","renderRest","renderRawRest","suffix","component","itemComponent","onVisibleChange"],QT="responsive",ZT="invalidate";function bF(e){return"+ ".concat(e.length," ...")}function yF(e,t){var n=e.prefixCls,r=n===void 0?"rc-overflow":n,o=e.data,i=o===void 0?[]:o,a=e.renderItem,s=e.renderRawItem,c=e.itemKey,u=e.itemWidth,p=u===void 0?10:u,v=e.ssr,h=e.style,m=e.className,b=e.maxCount,y=e.renderRest,w=e.renderRawRest,C=e.suffix,S=e.component,E=S===void 0?"div":S,k=e.itemComponent,O=e.onVisibleChange,$=Mt(e,mF),T=v==="full",M=fF(),P=Jc(M,null),R=ve(P,2),A=R[0],V=R[1],z=A||0,B=Jc(M,new Map),_=ve(B,2),H=_[0],j=_[1],L=Jc(M,0),F=ve(L,2),U=F[0],D=F[1],W=Jc(M,0),G=ve(W,2),q=G[0],J=G[1],Y=Jc(M,0),Q=ve(Y,2),te=Q[0],ce=Q[1],se=d.useState(null),ne=ve(se,2),ae=ne[0],ee=ne[1],re=d.useState(null),le=ve(re,2),pe=le[0],Oe=le[1],ge=d.useMemo(function(){return pe===null&&T?Number.MAX_SAFE_INTEGER:pe||0},[pe,A]),Re=d.useState(!1),ye=ve(Re,2),Te=ye[0],Ae=ye[1],me="".concat(r,"-item"),Ie=Math.max(U,q),Le=b===QT,Be=i.length&&Le,et=b===ZT,rt=Be||typeof b=="number"&&i.length>b,Ze=d.useMemo(function(){var Je=i;return Be?A===null&&T?Je=i:Je=i.slice(0,Math.min(i.length,z/p)):typeof b=="number"&&(Je=i.slice(0,b)),Je},[i,p,A,b,Be]),Ve=d.useMemo(function(){return Be?i.slice(ge+1):i.slice(Ze.length)},[i,Ze,Be,ge]),Ye=d.useCallback(function(Je,He){var We;return typeof c=="function"?c(Je):(We=c&&(Je==null?void 0:Je[c]))!==null&&We!==void 0?We:He},[c]),Ge=d.useCallback(a||function(Je){return Je},[a]);function Fe(Je,He,We){pe===Je&&(He===void 0||He===ae)||(Oe(Je),We||(Ae(Jez){Fe(Et-1,Je-wt-te+q);break}}C&&Ke(0)+te>z&&ee(null)}},[z,H,q,te,Ye,Ze]);var St=Te&&!!Ve.length,Ft={};ae!==null&&Be&&(Ft={position:"absolute",left:ae,top:0});var 
Lt={prefixCls:me,responsive:Be,component:k,invalidate:et},Ct=s?function(Je,He){var We=Ye(Je,He);return d.createElement(Jp.Provider,{key:We,value:Z(Z({},Lt),{},{order:He,item:Je,itemKey:We,registerSize:ze,display:He<=ge})},s(Je,He))}:function(Je,He){var We=Ye(Je,He);return d.createElement(Eu,$e({},Lt,{order:He,key:We,item:Je,renderItem:Ge,itemKey:We,registerSize:ze,display:He<=ge}))},Xt,Pt={order:St?ge:Number.MAX_SAFE_INTEGER,className:"".concat(me,"-rest"),registerSize:Me,display:St};if(w)w&&(Xt=d.createElement(Jp.Provider,{value:Z(Z({},Lt),Pt)},w(Ve)));else{var Gt=y||bF;Xt=d.createElement(Eu,$e({},Lt,Pt),typeof Gt=="function"?Gt(Ve):Gt)}var ft=d.createElement(E,$e({className:ie(!et&&r,m),style:h,ref:t},$),Ze.map(Ct),rt?Xt:null,C&&d.createElement(Eu,$e({},Lt,{responsive:Le,responsiveDisabled:!Be,order:ge,className:"".concat(me,"-suffix"),registerSize:Pe,display:!0,style:Ft}),C));return Le&&(ft=d.createElement(qo,{onResize:we,disabled:!Be},ft)),ft}var Di=d.forwardRef(yF);Di.displayName="Overflow";Di.Item=YT;Di.RESPONSIVE=QT;Di.INVALIDATE=ZT;var wF=function(t,n){var r,o=t.prefixCls,i=t.id,a=t.inputElement,s=t.disabled,c=t.tabIndex,u=t.autoFocus,p=t.autoComplete,v=t.editable,h=t.activeDescendantId,m=t.value,b=t.maxLength,y=t.onKeyDown,w=t.onMouseDown,C=t.onChange,S=t.onPaste,E=t.onCompositionStart,k=t.onCompositionEnd,O=t.open,$=t.attrs,T=a||d.createElement("input",null),M=T,P=M.ref,R=M.props,A=R.onKeyDown,V=R.onChange,z=R.onMouseDown,B=R.onCompositionStart,_=R.onCompositionEnd,H=R.style;return"maxLength"in T.props,T=d.cloneElement(T,Z(Z(Z({type:"search"},R),{},{id:i,ref:Wr(n,P),disabled:s,tabIndex:c,autoComplete:p||"off",autoFocus:u,className:ie("".concat(o,"-selection-search-input"),(r=T)===null||r===void 0||(r=r.props)===null||r===void 0?void 0:r.className),role:"combobox","aria-expanded":O||!1,"aria-haspopup":"listbox","aria-owns":"".concat(i,"_list"),"aria-autocomplete":"list","aria-controls":"".concat(i,"_list"),"aria-activedescendant":O?h:void 
0},$),{},{value:v?m:"",maxLength:b,readOnly:!v,unselectable:v?null:"on",style:Z(Z({},H),{},{opacity:v?null:0}),onKeyDown:function(L){y(L),A&&A(L)},onMouseDown:function(L){w(L),z&&z(L)},onChange:function(L){C(L),V&&V(L)},onCompositionStart:function(L){E(L),B&&B(L)},onCompositionEnd:function(L){k(L),_&&_(L)},onPaste:S})),T},JT=d.forwardRef(wF);function eP(e){return Array.isArray(e)?e:e!==void 0?[e]:[]}var xF=typeof window<"u"&&window.document&&window.document.documentElement,SF=xF;function CF(e){return e!=null}function EF(e){return!e&&e!==0}function _E(e){return["string","number"].includes(st(e))}function tP(e){var t=void 0;return e&&(_E(e.title)?t=e.title.toString():_E(e.label)&&(t=e.label.toString())),t}function kF(e,t){SF?d.useLayoutEffect(e,t):d.useEffect(e,t)}function OF(e){var t;return(t=e.key)!==null&&t!==void 0?t:e.value}var VE=function(t){t.preventDefault(),t.stopPropagation()},$F=function(t){var n=t.id,r=t.prefixCls,o=t.values,i=t.open,a=t.searchValue,s=t.autoClearSearchValue,c=t.inputRef,u=t.placeholder,p=t.disabled,v=t.mode,h=t.showSearch,m=t.autoFocus,b=t.autoComplete,y=t.activeDescendantId,w=t.tabIndex,C=t.removeIcon,S=t.maxTagCount,E=t.maxTagTextLength,k=t.maxTagPlaceholder,O=k===void 0?function(ee){return"+ ".concat(ee.length," ...")}:k,$=t.tagRender,T=t.onToggleOpen,M=t.onRemove,P=t.onInputChange,R=t.onInputPaste,A=t.onInputKeyDown,V=t.onInputMouseDown,z=t.onInputCompositionStart,B=t.onInputCompositionEnd,_=d.useRef(null),H=d.useState(0),j=ve(H,2),L=j[0],F=j[1],U=d.useState(!1),D=ve(U,2),W=D[0],G=D[1],q="".concat(r,"-selection"),J=i||v==="multiple"&&s===!1||v==="tags"?a:"",Y=v==="tags"||v==="multiple"&&s===!1||h&&(i||W);kF(function(){F(_.current.scrollWidth)},[J]);var Q=function(re,le,pe,Oe,ge){return 
d.createElement("span",{title:tP(re),className:ie("".concat(q,"-item"),K({},"".concat(q,"-item-disabled"),pe))},d.createElement("span",{className:"".concat(q,"-item-content")},le),Oe&&d.createElement(Uv,{className:"".concat(q,"-item-remove"),onMouseDown:VE,onClick:ge,customizeIcon:C},"×"))},te=function(re,le,pe,Oe,ge,Re){var ye=function(Ae){VE(Ae),T(!i)};return d.createElement("span",{onMouseDown:ye},$({label:le,value:re,disabled:pe,closable:Oe,onClose:ge,isMaxTag:!!Re}))},ce=function(re){var le=re.disabled,pe=re.label,Oe=re.value,ge=!p&&!le,Re=pe;if(typeof E=="number"&&(typeof pe=="string"||typeof pe=="number")){var ye=String(Re);ye.length>E&&(Re="".concat(ye.slice(0,E),"..."))}var Te=function(me){me&&me.stopPropagation(),M(re)};return typeof $=="function"?te(Oe,Re,le,ge,Te):Q(re,Re,le,ge,Te)},se=function(re){var le=typeof O=="function"?O(re):O;return typeof $=="function"?te(void 0,le,!1,!1,void 0,!0):Q({title:le},le,!1)},ne=d.createElement("div",{className:"".concat(q,"-search"),style:{width:L},onFocus:function(){G(!0)},onBlur:function(){G(!1)}},d.createElement(JT,{ref:c,open:i,prefixCls:r,id:n,inputElement:null,disabled:p,autoFocus:m,autoComplete:b,editable:Y,activeDescendantId:y,value:J,onKeyDown:A,onMouseDown:V,onChange:P,onPaste:R,onCompositionStart:z,onCompositionEnd:B,tabIndex:w,attrs:Gr(t,!0)}),d.createElement("span",{ref:_,className:"".concat(q,"-search-mirror"),"aria-hidden":!0},J," ")),ae=d.createElement(Di,{prefixCls:"".concat(q,"-overflow"),data:o,renderItem:ce,renderRest:se,suffix:ne,itemKey:OF,maxCount:S});return d.createElement(d.Fragment,null,ae,!o.length&&!J&&d.createElement("span",{className:"".concat(q,"-placeholder")},u))},IF=function(t){var 
n=t.inputElement,r=t.prefixCls,o=t.id,i=t.inputRef,a=t.disabled,s=t.autoFocus,c=t.autoComplete,u=t.activeDescendantId,p=t.mode,v=t.open,h=t.values,m=t.placeholder,b=t.tabIndex,y=t.showSearch,w=t.searchValue,C=t.activeValue,S=t.maxLength,E=t.onInputKeyDown,k=t.onInputMouseDown,O=t.onInputChange,$=t.onInputPaste,T=t.onInputCompositionStart,M=t.onInputCompositionEnd,P=t.title,R=d.useState(!1),A=ve(R,2),V=A[0],z=A[1],B=p==="combobox",_=B||y,H=h[0],j=w||"";B&&C&&!V&&(j=C),d.useEffect(function(){B&&z(!1)},[B,C]);var L=p!=="combobox"&&!v&&!y?!1:!!j,F=P===void 0?tP(H):P,U=d.useMemo(function(){return H?null:d.createElement("span",{className:"".concat(r,"-selection-placeholder"),style:L?{visibility:"hidden"}:void 0},m)},[H,L,m,r]);return d.createElement(d.Fragment,null,d.createElement("span",{className:"".concat(r,"-selection-search")},d.createElement(JT,{ref:i,prefixCls:r,id:o,open:v,inputElement:n,disabled:a,autoFocus:s,autoComplete:c,editable:_,activeDescendantId:u,value:j,onKeyDown:E,onMouseDown:k,onChange:function(W){z(!0),O(W)},onPaste:$,onCompositionStart:T,onCompositionEnd:M,tabIndex:b,attrs:Gr(t,!0),maxLength:B?S:void 0})),!B&&H?d.createElement("span",{className:"".concat(r,"-selection-item"),title:F,style:L?{visibility:"hidden"}:void 0},H.label):null,U)},TF=function(t,n){var r=d.useRef(null),o=d.useRef(!1),i=t.prefixCls,a=t.open,s=t.mode,c=t.showSearch,u=t.tokenWithEnter,p=t.disabled,v=t.autoClearSearchValue,h=t.onSearch,m=t.onSearchSubmit,b=t.onToggleOpen,y=t.onInputKeyDown,w=t.domRef;d.useImperativeHandle(n,function(){return{focus:function(L){r.current.focus(L)},blur:function(){r.current.blur()}}});var C=GT(0),S=ve(C,2),E=S[0],k=S[1],O=function(L){var 
F=L.which;(F===De.UP||F===De.DOWN)&&L.preventDefault(),y&&y(L),F===De.ENTER&&s==="tags"&&!o.current&&!a&&(m==null||m(L.target.value)),lF(F)&&b(!0)},$=function(){k(!0)},T=d.useRef(null),M=function(L){h(L,!0,o.current)!==!1&&b(!0)},P=function(){o.current=!0},R=function(L){o.current=!1,s!=="combobox"&&M(L.target.value)},A=function(L){var F=L.target.value;if(u&&T.current&&/[\r\n]/.test(T.current)){var U=T.current.replace(/[\r\n]+$/,"").replace(/\r\n/g," ").replace(/[\r\n]/g," ");F=F.replace(U,T.current)}T.current=null,M(F)},V=function(L){var F=L.clipboardData,U=F==null?void 0:F.getData("text");T.current=U||""},z=function(L){var F=L.target;if(F!==r.current){var U=document.body.style.msTouchAction!==void 0;U?setTimeout(function(){r.current.focus()}):r.current.focus()}},B=function(L){var F=E();L.target!==r.current&&!F&&!(s==="combobox"&&p)&&L.preventDefault(),(s!=="combobox"&&(!c||!F)||!a)&&(a&&v!==!1&&h("",!0,!1),b())},_={inputRef:r,onInputKeyDown:O,onInputMouseDown:$,onInputChange:A,onInputPaste:V,onInputCompositionStart:P,onInputCompositionEnd:R},H=s==="multiple"||s==="tags"?d.createElement($F,$e({},t,_)):d.createElement(IF,$e({},t,_));return d.createElement("div",{ref:w,className:"".concat(i,"-selector"),onClick:z,onMouseDown:B},H)},PF=d.forwardRef(TF);function MF(e){var t=e.prefixCls,n=e.align,r=e.arrow,o=e.arrowPos,i=r||{},a=i.className,s=i.content,c=o.x,u=c===void 0?0:c,p=o.y,v=p===void 0?0:p,h=d.useRef();if(!n||!n.points)return null;var m={position:"absolute"};if(n.autoArrow!==!1){var b=n.points[0],y=n.points[1],w=b[0],C=b[1],S=y[0],E=y[1];w===S||!["t","b"].includes(w)?m.top=v:w==="t"?m.top=0:m.bottom=0,C===E||!["l","r"].includes(C)?m.left=u:C==="l"?m.left=0:m.right=0}return d.createElement("div",{ref:h,className:ie("".concat(t,"-arrow"),a),style:m},s)}function NF(e){var t=e.prefixCls,n=e.open,r=e.zIndex,o=e.mask,i=e.motion;return o?d.createElement(Xo,$e({},i,{motionAppear:!0,visible:n,removeOnLeave:!0}),function(a){var s=a.className;return 
d.createElement("div",{style:{zIndex:r},className:ie("".concat(t,"-mask"),s)})}):null}var RF=d.memo(function(e){var t=e.children;return t},function(e,t){return t.cache}),DF=d.forwardRef(function(e,t){var n=e.popup,r=e.className,o=e.prefixCls,i=e.style,a=e.target,s=e.onVisibleChanged,c=e.open,u=e.keepDom,p=e.fresh,v=e.onClick,h=e.mask,m=e.arrow,b=e.arrowPos,y=e.align,w=e.motion,C=e.maskMotion,S=e.forceRender,E=e.getPopupContainer,k=e.autoDestroy,O=e.portal,$=e.zIndex,T=e.onMouseEnter,M=e.onMouseLeave,P=e.onPointerEnter,R=e.onPointerDownCapture,A=e.ready,V=e.offsetX,z=e.offsetY,B=e.offsetR,_=e.offsetB,H=e.onAlign,j=e.onPrepare,L=e.stretch,F=e.targetWidth,U=e.targetHeight,D=typeof n=="function"?n():n,W=c||u,G=(E==null?void 0:E.length)>0,q=d.useState(!E||!G),J=ve(q,2),Y=J[0],Q=J[1];if(sn(function(){!Y&&G&&a&&Q(!0)},[Y,G,a]),!Y)return null;var te="auto",ce={left:"-1000vw",top:"-1000vh",right:te,bottom:te};if(A||!c){var se,ne=y.points,ae=y.dynamicInset||((se=y._experimental)===null||se===void 0?void 0:se.dynamicInset),ee=ae&&ne[0][1]==="r",re=ae&&ne[0][0]==="b";ee?(ce.right=B,ce.left=te):(ce.left=V,ce.right=te),re?(ce.bottom=_,ce.top=te):(ce.top=z,ce.bottom=te)}var le={};return L&&(L.includes("height")&&U?le.height=U:L.includes("minHeight")&&U&&(le.minHeight=U),L.includes("width")&&F?le.width=F:L.includes("minWidth")&&F&&(le.minWidth=F)),c||(le.pointerEvents="none"),d.createElement(O,{open:S||W,getContainer:E&&function(){return E(a)},autoDestroy:k},d.createElement(NF,{prefixCls:o,open:c,zIndex:$,mask:h,motion:C}),d.createElement(qo,{onResize:H,disabled:!c},function(pe){return d.createElement(Xo,$e({motionAppear:!0,motionEnter:!0,motionLeave:!0,removeOnLeave:!1,forceRender:S,leavedClassName:"".concat(o,"-hidden")},w,{onAppearPrepare:j,onEnterPrepare:j,visible:c,onVisibleChanged:function(ge){var Re;w==null||(Re=w.onVisibleChanged)===null||Re===void 0||Re.call(w,ge),s(ge)}}),function(Oe,ge){var Re=Oe.className,ye=Oe.style,Te=ie(o,Re,r);return 
d.createElement("div",{ref:Wr(pe,t,ge),className:Te,style:Z(Z(Z(Z({"--arrow-x":"".concat(b.x||0,"px"),"--arrow-y":"".concat(b.y||0,"px")},ce),le),ye),{},{boxSizing:"border-box",zIndex:$},i),onMouseEnter:T,onMouseLeave:M,onPointerEnter:P,onClick:v,onPointerDownCapture:R},m&&d.createElement(MF,{prefixCls:o,arrow:m,arrowPos:b,align:y}),d.createElement(RF,{cache:!c&&!p},D))})}))}),jF=d.forwardRef(function(e,t){var n=e.children,r=e.getTriggerDOMNode,o=vi(n),i=d.useCallback(function(s){_u(t,r?r(s):s)},[r]),a=Bs(i,n.ref);return o?d.cloneElement(n,{ref:a}):n}),WE=d.createContext(null);function UE(e){return e?Array.isArray(e)?e:[e]:[]}function LF(e,t,n,r){return d.useMemo(function(){var o=UE(n??t),i=UE(r??t),a=new Set(o),s=new Set(i);return e&&(a.has("hover")&&(a.delete("hover"),a.add("click")),s.has("hover")&&(s.delete("hover"),s.add("click"))),[a,s]},[e,t,n,r])}function BF(){var e=arguments.length>0&&arguments[0]!==void 0?arguments[0]:[],t=arguments.length>1&&arguments[1]!==void 0?arguments[1]:[],n=arguments.length>2?arguments[2]:void 0;return n?e[0]===t[0]:e[0]===t[0]&&e[1]===t[1]}function AF(e,t,n,r){for(var o=n.points,i=Object.keys(e),a=0;a1&&arguments[1]!==void 0?arguments[1]:1;return Number.isNaN(e)?t:e}function eu(e){return rd(parseFloat(e),0)}function qE(e,t){var n=Z({},e);return(t||[]).forEach(function(r){if(!(r instanceof HTMLBodyElement||r instanceof HTMLHtmlElement)){var o=Od(r).getComputedStyle(r),i=o.overflow,a=o.overflowClipMargin,s=o.borderTopWidth,c=o.borderBottomWidth,u=o.borderLeftWidth,p=o.borderRightWidth,v=r.getBoundingClientRect(),h=r.offsetHeight,m=r.clientHeight,b=r.offsetWidth,y=r.clientWidth,w=eu(s),C=eu(c),S=eu(u),E=eu(p),k=rd(Math.round(v.width/b*1e3)/1e3),O=rd(Math.round(v.height/h*1e3)/1e3),$=(b-y-S-E)*k,T=(h-m-w-C)*O,M=w*O,P=C*O,R=S*k,A=E*k,V=0,z=0;if(i==="clip"){var B=eu(a);V=B*k,z=B*O}var 
_=v.x+R-V,H=v.y+M-z,j=_+v.width+2*V-R-A-$,L=H+v.height+2*z-M-P-T;n.left=Math.max(n.left,_),n.top=Math.max(n.top,H),n.right=Math.min(n.right,j),n.bottom=Math.min(n.bottom,L)}}),n}function XE(e){var t=arguments.length>1&&arguments[1]!==void 0?arguments[1]:0,n="".concat(t),r=n.match(/^(.*)\%$/);return r?e*(parseFloat(r[1])/100):parseFloat(n)}function GE(e,t){var n=t||[],r=ve(n,2),o=r[0],i=r[1];return[XE(e.width,o),XE(e.height,i)]}function YE(){var e=arguments.length>0&&arguments[0]!==void 0?arguments[0]:"";return[e[0],e[1]]}function fl(e,t){var n=t[0],r=t[1],o,i;return n==="t"?i=e.y:n==="b"?i=e.y+e.height:i=e.y+e.height/2,r==="l"?o=e.x:r==="r"?o=e.x+e.width:o=e.x+e.width/2,{x:o,y:i}}function $a(e,t){var n={t:"b",b:"t",l:"r",r:"l"};return e.map(function(r,o){return o===t?n[r]||"c":r}).join("")}function zF(e,t,n,r,o,i,a){var s=d.useState({ready:!1,offsetX:0,offsetY:0,offsetR:0,offsetB:0,arrowX:0,arrowY:0,scaleX:1,scaleY:1,align:o[r]||{}}),c=ve(s,2),u=c[0],p=c[1],v=d.useRef(0),h=d.useMemo(function(){return t?Eb(t):[]},[t]),m=d.useRef({}),b=function(){m.current={}};e||b();var y=gn(function(){if(t&&n&&e){let En=function(xr,Bn){var An=arguments.length>2&&arguments[2]!==void 0?arguments[2]:Ie,sr=q.x+xr,Zr=q.y+Bn,Hi=sr+ee,ko=Zr+ae,xc=Math.max(sr,An.left),bt=Math.max(Zr,An.top),Nt=Math.min(Hi,An.right),Mn=Math.min(ko,An.bottom);return Math.max(0,(Nt-xc)*(Mn-bt))},Pn=function(){Kt=q.y+We,ln=Kt+ae,Yt=q.x+He,un=Yt+ee};var S,E,k,O,$=t,T=$.ownerDocument,M=Od($),P=M.getComputedStyle($),R=P.width,A=P.height,V=P.position,z=$.style.left,B=$.style.top,_=$.style.right,H=$.style.bottom,j=$.style.overflow,L=Z(Z({},o[r]),i),F=T.createElement("div");(S=$.parentElement)===null||S===void 
0||S.appendChild(F),F.style.left="".concat($.offsetLeft,"px"),F.style.top="".concat($.offsetTop,"px"),F.style.position=V,F.style.height="".concat($.offsetHeight,"px"),F.style.width="".concat($.offsetWidth,"px"),$.style.left="0",$.style.top="0",$.style.right="auto",$.style.bottom="auto",$.style.overflow="hidden";var U;if(Array.isArray(n))U={x:n[0],y:n[1],width:0,height:0};else{var D,W,G=n.getBoundingClientRect();G.x=(D=G.x)!==null&&D!==void 0?D:G.left,G.y=(W=G.y)!==null&&W!==void 0?W:G.top,U={x:G.x,y:G.y,width:G.width,height:G.height}}var q=$.getBoundingClientRect();q.x=(E=q.x)!==null&&E!==void 0?E:q.left,q.y=(k=q.y)!==null&&k!==void 0?k:q.top;var J=T.documentElement,Y=J.clientWidth,Q=J.clientHeight,te=J.scrollWidth,ce=J.scrollHeight,se=J.scrollTop,ne=J.scrollLeft,ae=q.height,ee=q.width,re=U.height,le=U.width,pe={left:0,top:0,right:Y,bottom:Q},Oe={left:-ne,top:-se,right:te-ne,bottom:ce-se},ge=L.htmlRegion,Re="visible",ye="visibleFirst";ge!=="scroll"&&ge!==ye&&(ge=Re);var Te=ge===ye,Ae=qE(Oe,h),me=qE(pe,h),Ie=ge===Re?me:Ae,Le=Te?me:Ie;$.style.left="auto",$.style.top="auto",$.style.right="0",$.style.bottom="0";var Be=$.getBoundingClientRect();$.style.left=z,$.style.top=B,$.style.right=_,$.style.bottom=H,$.style.overflow=j,(O=$.parentElement)===null||O===void 0||O.removeChild(F);var et=rd(Math.round(ee/parseFloat(R)*1e3)/1e3),rt=rd(Math.round(ae/parseFloat(A)*1e3)/1e3);if(et===0||rt===0||Fu(n)&&!xd(n))return;var Ze=L.offset,Ve=L.targetOffset,Ye=GE(q,Ze),Ge=ve(Ye,2),Fe=Ge[0],we=Ge[1],ze=GE(U,Ve),Me=ve(ze,2),Pe=Me[0],Ke=Me[1];U.x-=Pe,U.y-=Ke;var St=L.points||[],Ft=ve(St,2),Lt=Ft[0],Ct=Ft[1],Xt=YE(Ct),Pt=YE(Lt),Gt=fl(U,Xt),ft=fl(q,Pt),Je=Z({},L),He=Gt.x-ft.x+Fe,We=Gt.y-ft.y+we,Et=En(He,We),wt=En(He,We,me),_e=fl(U,["t","l"]),qe=fl(q,["t","l"]),ot=fl(U,["b","r"]),at=fl(q,["b","r"]),xt=L.overflow||{},_t=xt.adjustX,pt=xt.adjustY,dt=xt.shiftX,$t=xt.shiftY,kt=function(Bn){return typeof Bn=="boolean"?Bn:Bn>=0},Kt,ln,Yt,un;Pn();var 
ut=kt(pt),lt=Pt[0]===Xt[0];if(ut&&Pt[0]==="t"&&(ln>Le.bottom||m.current.bt)){var gt=We;lt?gt-=ae-re:gt=_e.y-at.y-we;var Qt=En(He,gt),dn=En(He,gt,me);Qt>Et||Qt===Et&&(!Te||dn>=wt)?(m.current.bt=!0,We=gt,we=-we,Je.points=[$a(Pt,0),$a(Xt,0)]):m.current.bt=!1}if(ut&&Pt[0]==="b"&&(KtEt||Sn===Et&&(!Te||Xn>=wt)?(m.current.tb=!0,We=tn,we=-we,Je.points=[$a(Pt,0),$a(Xt,0)]):m.current.tb=!1}var or=kt(_t),tr=Pt[1]===Xt[1];if(or&&Pt[1]==="l"&&(un>Le.right||m.current.rl)){var mt=He;tr?mt-=ee-le:mt=_e.x-at.x-Fe;var Bt=En(mt,We),Zt=En(mt,We,me);Bt>Et||Bt===Et&&(!Te||Zt>=wt)?(m.current.rl=!0,He=mt,Fe=-Fe,Je.points=[$a(Pt,1),$a(Xt,1)]):m.current.rl=!1}if(or&&Pt[1]==="r"&&(YtEt||dr===Et&&(!Te||Gn>=wt)?(m.current.lr=!0,He=hn,Fe=-Fe,Je.points=[$a(Pt,1),$a(Xt,1)]):m.current.lr=!1}Pn();var fr=dt===!0?0:dt;typeof fr=="number"&&(Ytme.right&&(He-=un-me.right-Fe,U.x>me.right-fr&&(He+=U.x-me.right+fr)));var pr=$t===!0?0:$t;typeof pr=="number"&&(Ktme.bottom&&(We-=ln-me.bottom-we,U.y>me.bottom-pr&&(We+=U.y-me.bottom+pr)));var Qr=q.x+He,vr=Qr+ee,Tn=q.y+We,Vt=Tn+ae,ct=U.x,Rt=ct+le,Ht=U.y,Jt=Ht+re,an=Math.max(Qr,ct),_n=Math.min(vr,Rt),Cn=(an+_n)/2,hr=Cn-Qr,ir=Math.max(Tn,Ht),Wt=Math.min(Vt,Jt),ar=(ir+Wt)/2,Tr=ar-Tn;a==null||a(t,Je);var At=Be.right-q.x-(He+q.width),zt=Be.bottom-q.y-(We+q.height);et===1&&(He=Math.round(He),At=Math.round(At)),rt===1&&(We=Math.round(We),zt=Math.round(zt));var Vn={ready:!0,offsetX:He/et,offsetY:We/rt,offsetR:At/et,offsetB:zt/rt,arrowX:hr/et,arrowY:Tr/rt,scaleX:et,scaleY:rt,align:Je};p(Vn)}}),w=function(){v.current+=1;var E=v.current;Promise.resolve().then(function(){v.current===E&&y()})},C=function(){p(function(E){return Z(Z({},E),{},{ready:!1})})};return sn(C,[r]),sn(function(){e||C()},[e]),[u.ready,u.offsetX,u.offsetY,u.offsetR,u.offsetB,u.arrowX,u.arrowY,u.scaleX,u.scaleY,u.align,w]}function HF(e,t,n,r,o){sn(function(){if(e&&t&&n){let v=function(){r(),o()};var i=t,a=n,s=Eb(i),c=Eb(a),u=Od(a),p=new Set([u].concat(Se(s),Se(c)));return 
p.forEach(function(h){h.addEventListener("scroll",v,{passive:!0})}),u.addEventListener("resize",v,{passive:!0}),r(),function(){p.forEach(function(h){h.removeEventListener("scroll",v),u.removeEventListener("resize",v)})}}},[e,t,n])}function FF(e,t,n,r,o,i,a,s){var c=d.useRef(e);c.current=e;var u=d.useRef(!1);d.useEffect(function(){if(t&&r&&(!o||i)){var v=function(){u.current=!1},h=function(w){var C;c.current&&!a(((C=w.composedPath)===null||C===void 0||(C=C.call(w))===null||C===void 0?void 0:C[0])||w.target)&&!u.current&&s(!1)},m=Od(r);m.addEventListener("pointerdown",v,!0),m.addEventListener("mousedown",h,!0),m.addEventListener("contextmenu",h,!0);var b=Xp(n);return b&&(b.addEventListener("mousedown",h,!0),b.addEventListener("contextmenu",h,!0)),function(){m.removeEventListener("pointerdown",v,!0),m.removeEventListener("mousedown",h,!0),m.removeEventListener("contextmenu",h,!0),b&&(b.removeEventListener("mousedown",h,!0),b.removeEventListener("contextmenu",h,!0))}}},[t,n,r,o,i]);function p(){u.current=!0}return p}var _F=["prefixCls","children","action","showAction","hideAction","popupVisible","defaultPopupVisible","onPopupVisibleChange","afterPopupVisibleChange","mouseEnterDelay","mouseLeaveDelay","focusDelay","blurDelay","mask","maskClosable","getPopupContainer","forceRender","autoDestroy","destroyPopupOnHide","popup","popupClassName","popupStyle","popupPlacement","builtinPlacements","popupAlign","zIndex","stretch","getPopupClassNameFromAlign","fresh","alignPoint","onPopupClick","onPopupAlign","arrow","popupMotion","maskMotion","popupTransitionName","popupAnimation","maskTransitionName","maskAnimation","className","getTriggerDOMNode"];function VF(){var e=arguments.length>0&&arguments[0]!==void 0?arguments[0]:pw,t=d.forwardRef(function(n,r){var o=n.prefixCls,i=o===void 0?"rc-trigger-popup":o,a=n.children,s=n.action,c=s===void 
0?"hover":s,u=n.showAction,p=n.hideAction,v=n.popupVisible,h=n.defaultPopupVisible,m=n.onPopupVisibleChange,b=n.afterPopupVisibleChange,y=n.mouseEnterDelay,w=n.mouseLeaveDelay,C=w===void 0?.1:w,S=n.focusDelay,E=n.blurDelay,k=n.mask,O=n.maskClosable,$=O===void 0?!0:O,T=n.getPopupContainer,M=n.forceRender,P=n.autoDestroy,R=n.destroyPopupOnHide,A=n.popup,V=n.popupClassName,z=n.popupStyle,B=n.popupPlacement,_=n.builtinPlacements,H=_===void 0?{}:_,j=n.popupAlign,L=n.zIndex,F=n.stretch,U=n.getPopupClassNameFromAlign,D=n.fresh,W=n.alignPoint,G=n.onPopupClick,q=n.onPopupAlign,J=n.arrow,Y=n.popupMotion,Q=n.maskMotion,te=n.popupTransitionName,ce=n.popupAnimation,se=n.maskTransitionName,ne=n.maskAnimation,ae=n.className,ee=n.getTriggerDOMNode,re=Mt(n,_F),le=P||R||!1,pe=d.useState(!1),Oe=ve(pe,2),ge=Oe[0],Re=Oe[1];sn(function(){Re(KT())},[]);var ye=d.useRef({}),Te=d.useContext(WE),Ae=d.useMemo(function(){return{registerSubPopup:function(Nt,Mn){ye.current[Nt]=Mn,Te==null||Te.registerSubPopup(Nt,Mn)}}},[Te]),me=vT(),Ie=d.useState(null),Le=ve(Ie,2),Be=Le[0],et=Le[1],rt=d.useRef(null),Ze=gn(function(bt){rt.current=bt,Fu(bt)&&Be!==bt&&et(bt),Te==null||Te.registerSubPopup(me,bt)}),Ve=d.useState(null),Ye=ve(Ve,2),Ge=Ye[0],Fe=Ye[1],we=d.useRef(null),ze=gn(function(bt){Fu(bt)&&Ge!==bt&&(Fe(bt),we.current=bt)}),Me=d.Children.only(a),Pe=(Me==null?void 0:Me.props)||{},Ke={},St=gn(function(bt){var Nt,Mn,Zn=Ge;return(Zn==null?void 0:Zn.contains(bt))||((Nt=Xp(Zn))===null||Nt===void 0?void 0:Nt.host)===bt||bt===Zn||(Be==null?void 0:Be.contains(bt))||((Mn=Xp(Be))===null||Mn===void 0?void 0:Mn.host)===bt||bt===Be||Object.values(ye.current).some(function(On){return(On==null?void 0:On.contains(bt))||bt===On})}),Ft=KE(i,Y,ce,te),Lt=KE(i,Q,ne,se),Ct=d.useState(h||!1),Xt=ve(Ct,2),Pt=Xt[0],Gt=Xt[1],ft=v??Pt,Je=gn(function(bt){v===void 0&&Gt(bt)});sn(function(){Gt(v||!1)},[v]);var He=d.useRef(ft);He.current=ft;var We=d.useRef([]);We.current=[];var Et=gn(function(bt){var 
Nt;Je(bt),((Nt=We.current[We.current.length-1])!==null&&Nt!==void 0?Nt:ft)!==bt&&(We.current.push(bt),m==null||m(bt))}),wt=d.useRef(),_e=function(){clearTimeout(wt.current)},qe=function(Nt){var Mn=arguments.length>1&&arguments[1]!==void 0?arguments[1]:0;_e(),Mn===0?Et(Nt):wt.current=setTimeout(function(){Et(Nt)},Mn*1e3)};d.useEffect(function(){return _e},[]);var ot=d.useState(!1),at=ve(ot,2),xt=at[0],_t=at[1];sn(function(bt){(!bt||ft)&&_t(!0)},[ft]);var pt=d.useState(null),dt=ve(pt,2),$t=dt[0],kt=dt[1],Kt=d.useState(null),ln=ve(Kt,2),Yt=ln[0],un=ln[1],ut=function(Nt){un([Nt.clientX,Nt.clientY])},lt=zF(ft,Be,W&&Yt!==null?Yt:Ge,B,H,j,q),gt=ve(lt,11),Qt=gt[0],dn=gt[1],tn=gt[2],Sn=gt[3],Xn=gt[4],or=gt[5],tr=gt[6],mt=gt[7],Bt=gt[8],Zt=gt[9],hn=gt[10],dr=LF(ge,c,u,p),Gn=ve(dr,2),fr=Gn[0],pr=Gn[1],Qr=fr.has("click"),vr=pr.has("click")||pr.has("contextMenu"),Tn=gn(function(){xt||hn()}),Vt=function(){He.current&&W&&vr&&qe(!1)};HF(ft,Ge,Be,Tn,Vt),sn(function(){Tn()},[Yt,B]),sn(function(){ft&&!(H!=null&&H[B])&&Tn()},[JSON.stringify(j)]);var ct=d.useMemo(function(){var bt=AF(H,i,Zt,W);return ie(bt,U==null?void 0:U(Zt))},[Zt,U,H,i,W]);d.useImperativeHandle(r,function(){return{nativeElement:we.current,popupElement:rt.current,forceAlign:Tn}});var Rt=d.useState(0),Ht=ve(Rt,2),Jt=Ht[0],an=Ht[1],_n=d.useState(0),Cn=ve(_n,2),hr=Cn[0],ir=Cn[1],Wt=function(){if(F&&Ge){var Nt=Ge.getBoundingClientRect();an(Nt.width),ir(Nt.height)}},ar=function(){Wt(),Tn()},Tr=function(Nt){_t(!1),hn(),b==null||b(Nt)},At=function(){return new Promise(function(Nt){Wt(),kt(function(){return Nt})})};sn(function(){$t&&(hn(),$t(),kt(null))},[$t]);function zt(bt,Nt,Mn,Zn){Ke[bt]=function(On){var Ja;Zn==null||Zn(On),qe(Nt,Mn);for(var Sc=arguments.length,da=new Array(Sc>1?Sc-1:0),xi=1;xi1?Mn-1:0),On=1;On1?Mn-1:0),On=1;On1&&arguments[1]!==void 0?arguments[1]:{},n=t.fieldNames,r=t.childrenAsData,o=[],i=nP(n,!1),a=i.label,s=i.value,c=i.options,u=i.groupLabel;function 
p(v,h){Array.isArray(v)&&v.forEach(function(m){if(h||!(c in m)){var b=m[s];o.push({key:QE(m,o.length),groupOption:h,data:m,label:m[a],value:b})}else{var y=m[u];y===void 0&&r&&(y=m.label),o.push({key:QE(m,o.length),group:!0,data:m,label:y}),p(m[c],!0)}})}return p(e,!1),o}function Ob(e){var t=Z({},e);return"props"in t||Object.defineProperty(t,"props",{get:function(){return Fn(!1,"Return type is option instead of Option instance. Please read value directly instead of reading from `props`."),t}}),t}var GF=function(t,n,r){if(!n||!n.length)return null;var o=!1,i=function s(c,u){var p=fI(u),v=p[0],h=p.slice(1);if(!v)return[c];var m=c.split(v);return o=o||m.length>1,m.reduce(function(b,y){return[].concat(Se(b),Se(s(y,h)))},[]).filter(Boolean)},a=i(t,n);return o?typeof r<"u"?a.slice(0,r):a:null},yw=d.createContext(null);function YF(e){var t=e.visible,n=e.values;if(!t)return null;var r=50;return d.createElement("span",{"aria-live":"polite",style:{width:0,height:0,position:"absolute",overflow:"hidden",opacity:0}},"".concat(n.slice(0,r).map(function(o){var i=o.label,a=o.value;return["number","string"].includes(st(i))?i:a}).join(", ")),n.length>r?", ...":null)}var 
QF=["id","prefixCls","className","showSearch","tagRender","direction","omitDomProps","displayValues","onDisplayValuesChange","emptyOptions","notFoundContent","onClear","mode","disabled","loading","getInputElement","getRawInputElement","open","defaultOpen","onDropdownVisibleChange","activeValue","onActiveValueChange","activeDescendantId","searchValue","autoClearSearchValue","onSearch","onSearchSplit","tokenSeparators","allowClear","suffixIcon","clearIcon","OptionList","animation","transitionName","dropdownStyle","dropdownClassName","dropdownMatchSelectWidth","dropdownRender","dropdownAlign","placement","builtinPlacements","getPopupContainer","showAction","onFocus","onBlur","onKeyUp","onKeyDown","onMouseDown"],ZF=["value","onChange","removeIcon","placeholder","autoFocus","maxTagCount","maxTagTextLength","maxTagPlaceholder","choiceTransitionName","onInputKeyDown","onPopupScroll","tabIndex"],$b=function(t){return t==="tags"||t==="multiple"},rP=d.forwardRef(function(e,t){var n,r=e.id,o=e.prefixCls,i=e.className,a=e.showSearch,s=e.tagRender,c=e.direction,u=e.omitDomProps,p=e.displayValues,v=e.onDisplayValuesChange,h=e.emptyOptions,m=e.notFoundContent,b=m===void 0?"Not Found":m,y=e.onClear,w=e.mode,C=e.disabled,S=e.loading,E=e.getInputElement,k=e.getRawInputElement,O=e.open,$=e.defaultOpen,T=e.onDropdownVisibleChange,M=e.activeValue,P=e.onActiveValueChange,R=e.activeDescendantId,A=e.searchValue,V=e.autoClearSearchValue,z=e.onSearch,B=e.onSearchSplit,_=e.tokenSeparators,H=e.allowClear,j=e.suffixIcon,L=e.clearIcon,F=e.OptionList,U=e.animation,D=e.transitionName,W=e.dropdownStyle,G=e.dropdownClassName,q=e.dropdownMatchSelectWidth,J=e.dropdownRender,Y=e.dropdownAlign,Q=e.placement,te=e.builtinPlacements,ce=e.getPopupContainer,se=e.showAction,ne=se===void 0?[]:se,ae=e.onFocus,ee=e.onBlur,re=e.onKeyUp,le=e.onKeyDown,pe=e.onMouseDown,Oe=Mt(e,QF),ge=$b(w),Re=(a!==void 0?a:ge)||w==="combobox",ye=Z({},Oe);ZF.forEach(function(Vt){delete 
ye[Vt]}),u==null||u.forEach(function(Vt){delete ye[Vt]});var Te=d.useState(!1),Ae=ve(Te,2),me=Ae[0],Ie=Ae[1];d.useEffect(function(){Ie(KT())},[]);var Le=d.useRef(null),Be=d.useRef(null),et=d.useRef(null),rt=d.useRef(null),Ze=d.useRef(null),Ve=d.useRef(!1),Ye=aF(),Ge=ve(Ye,3),Fe=Ge[0],we=Ge[1],ze=Ge[2];d.useImperativeHandle(t,function(){var Vt,ct;return{focus:(Vt=rt.current)===null||Vt===void 0?void 0:Vt.focus,blur:(ct=rt.current)===null||ct===void 0?void 0:ct.blur,scrollTo:function(Ht){var Jt;return(Jt=Ze.current)===null||Jt===void 0?void 0:Jt.scrollTo(Ht)},nativeElement:Le.current||Be.current}});var Me=d.useMemo(function(){var Vt;if(w!=="combobox")return A;var ct=(Vt=p[0])===null||Vt===void 0?void 0:Vt.value;return typeof ct=="string"||typeof ct=="number"?String(ct):""},[A,w,p]),Pe=w==="combobox"&&typeof E=="function"&&E()||null,Ke=typeof k=="function"&&k(),St=Bs(Be,Ke==null||(n=Ke.props)===null||n===void 0?void 0:n.ref),Ft=d.useState(!1),Lt=ve(Ft,2),Ct=Lt[0],Xt=Lt[1];sn(function(){Xt(!0)},[]);var Pt=Dn(!1,{defaultValue:$,value:O}),Gt=ve(Pt,2),ft=Gt[0],Je=Gt[1],He=Ct?ft:!1,We=!b&&h;(C||We&&He&&w==="combobox")&&(He=!1);var Et=We?!1:He,wt=d.useCallback(function(Vt){var ct=Vt!==void 0?Vt:!He;C||(Je(ct),He!==ct&&(T==null||T(ct)))},[C,He,Je,T]),_e=d.useMemo(function(){return(_||[]).some(function(Vt){return[` -`,`\r -`].includes(Vt)})},[_]),qe=d.useContext(yw)||{},ot=qe.maxCount,at=qe.rawValues,xt=function(ct,Rt,Ht){if(!(ge&&kb(ot)&&(at==null?void 0:at.size)>=ot)){var Jt=!0,an=ct;P==null||P(null);var _n=GF(ct,_,kb(ot)?ot-at.size:void 0),Cn=Ht?null:_n;return w!=="combobox"&&Cn&&(an="",B==null||B(Cn),wt(!1),Jt=!1),z&&Me!==an&&z(an,{source:Rt?"typing":"effect"}),Jt}},_t=function(ct){!ct||!ct.trim()||z(ct,{source:"submit"})};d.useEffect(function(){!He&&!ge&&w!=="combobox"&&xt("",!1,!1)},[He]),d.useEffect(function(){ft&&C&&Je(!1),C&&!Ve.current&&we(!1)},[C]);var pt=GT(),dt=ve(pt,2),$t=dt[0],kt=dt[1],Kt=d.useRef(!1),ln=function(ct){var 
Rt=$t(),Ht=ct.key,Jt=Ht==="Enter";if(Jt&&(w!=="combobox"&&ct.preventDefault(),He||wt(!0)),kt(!!Me),Ht==="Backspace"&&!Rt&&ge&&!Me&&p.length){for(var an=Se(p),_n=null,Cn=an.length-1;Cn>=0;Cn-=1){var hr=an[Cn];if(!hr.disabled){an.splice(Cn,1),_n=hr;break}}_n&&v(an,{type:"remove",values:[_n]})}for(var ir=arguments.length,Wt=new Array(ir>1?ir-1:0),ar=1;ar1?Rt-1:0),Jt=1;Jt1?_n-1:0),hr=1;hr<_n;hr++)Cn[hr-1]=arguments[hr];pe==null||pe.apply(void 0,[ct].concat(Cn))},tn=d.useState({}),Sn=ve(tn,2),Xn=Sn[1];function or(){Xn({})}var tr;Ke&&(tr=function(ct){wt(ct)}),sF(function(){var Vt;return[Le.current,(Vt=et.current)===null||Vt===void 0?void 0:Vt.getPopupElement()]},Et,wt,!!Ke);var mt=d.useMemo(function(){return Z(Z({},e),{},{notFoundContent:b,open:He,triggerOpen:Et,id:r,showSearch:Re,multiple:ge,toggleOpen:wt})},[e,b,Et,He,r,Re,ge,wt]),Bt=!!j||S,Zt;Bt&&(Zt=d.createElement(Uv,{className:ie("".concat(o,"-arrow"),K({},"".concat(o,"-arrow-loading"),S)),customizeIcon:j,customizeIconProps:{loading:S,searchValue:Me,open:He,focused:Fe,showSearch:Re}}));var hn=function(){var ct;y==null||y(),(ct=rt.current)===null||ct===void 0||ct.focus(),v([],{type:"clear",values:p}),xt("",!1,!1)},dr=iF(o,hn,p,H,L,C,Me,w),Gn=dr.allowClear,fr=dr.clearIcon,pr=d.createElement(F,{ref:Ze}),Qr=ie(o,i,K(K(K(K(K(K(K(K(K(K({},"".concat(o,"-focused"),Fe),"".concat(o,"-multiple"),ge),"".concat(o,"-single"),!ge),"".concat(o,"-allow-clear"),H),"".concat(o,"-show-arrow"),Bt),"".concat(o,"-disabled"),C),"".concat(o,"-loading"),S),"".concat(o,"-open"),He),"".concat(o,"-customize-input"),Pe),"".concat(o,"-show-search"),Re)),vr=d.createElement(qF,{ref:et,disabled:C,prefixCls:o,visible:Et,popupElement:pr,animation:U,transitionName:D,dropdownStyle:W,dropdownClassName:G,direction:c,dropdownMatchSelectWidth:q,dropdownRender:J,dropdownAlign:Y,placement:Q,builtinPlacements:te,getPopupContainer:ce,empty:h,getTriggerDOMNode:function(ct){return 
Be.current||ct},onPopupVisibleChange:tr,onPopupMouseEnter:or},Ke?d.cloneElement(Ke,{ref:St}):d.createElement(PF,$e({},e,{domRef:Be,prefixCls:o,inputElement:Pe,ref:rt,id:r,showSearch:Re,autoClearSearchValue:V,mode:w,activeDescendantId:R,tagRender:s,values:p,open:He,onToggleOpen:wt,activeValue:M,searchValue:Me,onSearch:xt,onSearchSubmit:_t,onRemove:un,tokenWithEnter:_e}))),Tn;return Ke?Tn=vr:Tn=d.createElement("div",$e({className:Qr},ye,{ref:Le,onMouseDown:dn,onKeyDown:ln,onKeyUp:Yt,onFocus:lt,onBlur:gt}),d.createElement(YF,{visible:Fe&&!He,values:p}),vr,Zt,Gn&&fr),d.createElement(qT.Provider,{value:mt},Tn)}),ww=function(){return null};ww.isSelectOptGroup=!0;var xw=function(){return null};xw.isSelectOption=!0;var oP=d.forwardRef(function(e,t){var n=e.height,r=e.offsetY,o=e.offsetX,i=e.children,a=e.prefixCls,s=e.onInnerResize,c=e.innerProps,u=e.rtl,p=e.extra,v={},h={display:"flex",flexDirection:"column"};return r!==void 0&&(v={height:n,position:"relative",overflow:"hidden"},h=Z(Z({},h),{},K(K(K(K(K({transform:"translateY(".concat(r,"px)")},u?"marginRight":"marginLeft",-o),"position","absolute"),"left",0),"right",0),"top",0))),d.createElement("div",{style:v},d.createElement(qo,{onResize:function(b){var y=b.offsetHeight;y&&s&&s()}},d.createElement("div",$e({style:h,className:ie(K({},"".concat(a,"-holder-inner"),a)),ref:t},c),i,p)))});oP.displayName="Filler";function JF(e){var t=e.children,n=e.setRef,r=d.useCallback(function(o){n(o)},[]);return d.cloneElement(t,{ref:r})}function e_(e,t,n,r,o,i,a,s){var c=s.getKey;return e.slice(t,n+1).map(function(u,p){var v=t+p,h=a(u,v,{style:{width:r},offsetX:o}),m=c(u);return d.createElement(JF,{key:m,setRef:function(y){return i(u,y)}},h)})}function t_(e,t,n){var r=e.length,o=t.length,i,a;if(r===0&&o===0)return null;r"u"?"undefined":st(navigator))==="object"&&/Firefox/i.test(navigator.userAgent);const iP=function(e,t,n,r){var o=d.useRef(!1),i=d.useRef(null);function 
a(){clearTimeout(i.current),o.current=!0,i.current=setTimeout(function(){o.current=!1},50)}var s=d.useRef({top:e,bottom:t,left:n,right:r});return s.current.top=e,s.current.bottom=t,s.current.left=n,s.current.right=r,function(c,u){var p=arguments.length>2&&arguments[2]!==void 0?arguments[2]:!1,v=c?u<0&&s.current.left||u>0&&s.current.right:u<0&&s.current.top||u>0&&s.current.bottom;return p&&v?(clearTimeout(i.current),o.current=!1):(!v||o.current)&&a(),!o.current&&v}};function r_(e,t,n,r,o,i,a){var s=d.useRef(0),c=d.useRef(null),u=d.useRef(null),p=d.useRef(!1),v=iP(t,n,r,o);function h(S,E){if(bn.cancel(c.current),!v(!1,E)){var k=S;if(!k._virtualHandled)k._virtualHandled=!0;else return;s.current+=E,u.current=E,ZE||k.preventDefault(),c.current=bn(function(){var O=p.current?10:1;a(s.current*O,!1),s.current=0})}}function m(S,E){a(E,!0),ZE||S.preventDefault()}var b=d.useRef(null),y=d.useRef(null);function w(S){if(e){bn.cancel(y.current),y.current=bn(function(){b.current=null},2);var E=S.deltaX,k=S.deltaY,O=S.shiftKey,$=E,T=k;(b.current==="sx"||!b.current&&O&&k&&!E)&&($=k,T=0,b.current="sx");var M=Math.abs($),P=Math.abs(T);b.current===null&&(b.current=i&&M>P?"x":"y"),b.current==="y"?h(S,T):m(S,$)}}function C(S){e&&(p.current=S.detail===u.current)}return[w,C]}function o_(e,t,n,r){var o=d.useMemo(function(){return[new Map,[]]},[e,n.id,r]),i=ve(o,2),a=i[0],s=i[1],c=function(p){var v=arguments.length>1&&arguments[1]!==void 0?arguments[1]:p,h=a.get(p),m=a.get(v);if(h===void 0||m===void 0)for(var b=e.length,y=s.length;y0&&arguments[0]!==void 0?arguments[0]:!1;p();var b=function(){s.current.forEach(function(w,C){if(w&&w.offsetParent){var S=wu(w),E=S.offsetHeight,k=getComputedStyle(S),O=k.marginTop,$=k.marginBottom,T=JE(O),M=JE($),P=E+T+M;c.current.get(C)!==P&&c.current.set(C,P)}}),a(function(w){return w+1})};m?b():u.current=bn(b)}function h(m,b){var y=e(m);s.current.get(y),b?(s.current.set(y,b),v()):s.current.delete(y)}return d.useEffect(function(){return 
p},[]),[h,v,c.current,i]}var ek=14/15;function s_(e,t,n){var r=d.useRef(!1),o=d.useRef(0),i=d.useRef(0),a=d.useRef(null),s=d.useRef(null),c,u=function(m){if(r.current){var b=Math.ceil(m.touches[0].pageX),y=Math.ceil(m.touches[0].pageY),w=o.current-b,C=i.current-y,S=Math.abs(w)>Math.abs(C);S?o.current=b:i.current=y;var E=n(S,S?w:C,!1,m);E&&m.preventDefault(),clearInterval(s.current),E&&(s.current=setInterval(function(){S?w*=ek:C*=ek;var k=Math.floor(S?w:C);(!n(S,k,!0)||Math.abs(k)<=.1)&&clearInterval(s.current)},16))}},p=function(){r.current=!1,c()},v=function(m){c(),m.touches.length===1&&!r.current&&(r.current=!0,o.current=Math.ceil(m.touches[0].pageX),i.current=Math.ceil(m.touches[0].pageY),a.current=m.target,a.current.addEventListener("touchmove",u,{passive:!1}),a.current.addEventListener("touchend",p,{passive:!0}))};c=function(){a.current&&(a.current.removeEventListener("touchmove",u),a.current.removeEventListener("touchend",p))},sn(function(){return e&&t.current.addEventListener("touchstart",v,{passive:!0}),function(){var h;(h=t.current)===null||h===void 0||h.removeEventListener("touchstart",v),c(),clearInterval(s.current)}},[e])}var l_=10;function c_(e,t,n,r,o,i,a,s){var c=d.useRef(),u=d.useState(null),p=ve(u,2),v=p[0],h=p[1];return sn(function(){if(v&&v.times=0;B-=1){var _=o(t[B]),H=n.get(_);if(H===void 0){S=!0;break}if(z-=H,z<=0)break}switch(O){case"top":k=T-w;break;case"bottom":k=M-C+w;break;default:var j=e.current.scrollTop,L=j+C;TL&&(E="bottom")}k!==null&&a(k),k!==v.lastTop&&(S=!0)}S&&h(Z(Z({},v),{},{times:v.times+1,targetAlign:E,lastTop:k}))}},[v,e.current]),function(m){if(m==null){s();return}if(bn.cancel(c.current),typeof m=="number")a(m);else if(m&&st(m)==="object"){var b,y=m.align;"index"in m?b=m.index:b=t.findIndex(function(S){return o(S)===m.key});var w=m.offset,C=w===void 0?0:w;h({times:0,index:b,offset:C,originAlign:y})}}}function tk(e,t){var n="touches"in e?e.touches[0]:e;return n[t?"pageX":"pageY"]}var nk=d.forwardRef(function(e,t){var 
n=e.prefixCls,r=e.rtl,o=e.scrollOffset,i=e.scrollRange,a=e.onStartMove,s=e.onStopMove,c=e.onScroll,u=e.horizontal,p=e.spinSize,v=e.containerSize,h=e.style,m=e.thumbStyle,b=d.useState(!1),y=ve(b,2),w=y[0],C=y[1],S=d.useState(null),E=ve(S,2),k=E[0],O=E[1],$=d.useState(null),T=ve($,2),M=T[0],P=T[1],R=!r,A=d.useRef(),V=d.useRef(),z=d.useState(!1),B=ve(z,2),_=B[0],H=B[1],j=d.useRef(),L=function(){clearTimeout(j.current),H(!0),j.current=setTimeout(function(){H(!1)},3e3)},F=i-v||0,U=v-p||0,D=d.useMemo(function(){if(o===0||F===0)return 0;var se=o/F;return se*U},[o,F,U]),W=function(ne){ne.stopPropagation(),ne.preventDefault()},G=d.useRef({top:D,dragging:w,pageY:k,startTop:M});G.current={top:D,dragging:w,pageY:k,startTop:M};var q=function(ne){C(!0),O(tk(ne,u)),P(G.current.top),a(),ne.stopPropagation(),ne.preventDefault()};d.useEffect(function(){var se=function(re){re.preventDefault()},ne=A.current,ae=V.current;return ne.addEventListener("touchstart",se,{passive:!1}),ae.addEventListener("touchstart",q,{passive:!1}),function(){ne.removeEventListener("touchstart",se),ae.removeEventListener("touchstart",q)}},[]);var J=d.useRef();J.current=F;var Y=d.useRef();Y.current=U,d.useEffect(function(){if(w){var se,ne=function(re){var le=G.current,pe=le.dragging,Oe=le.pageY,ge=le.startTop;bn.cancel(se);var Re=A.current.getBoundingClientRect(),ye=v/(u?Re.width:Re.height);if(pe){var Te=(tk(re,u)-Oe)*ye,Ae=ge;!R&&u?Ae-=Te:Ae+=Te;var me=J.current,Ie=Y.current,Le=Ie?Ae/Ie:0,Be=Math.ceil(Le*me);Be=Math.max(Be,0),Be=Math.min(Be,me),se=bn(function(){c(Be,u)})}},ae=function(){C(!1),s()};return 
window.addEventListener("mousemove",ne,{passive:!0}),window.addEventListener("touchmove",ne,{passive:!0}),window.addEventListener("mouseup",ae,{passive:!0}),window.addEventListener("touchend",ae,{passive:!0}),function(){window.removeEventListener("mousemove",ne),window.removeEventListener("touchmove",ne),window.removeEventListener("mouseup",ae),window.removeEventListener("touchend",ae),bn.cancel(se)}}},[w]),d.useEffect(function(){return L(),function(){clearTimeout(j.current)}},[o]),d.useImperativeHandle(t,function(){return{delayHidden:L}});var Q="".concat(n,"-scrollbar"),te={position:"absolute",visibility:_?null:"hidden"},ce={position:"absolute",background:"rgba(0, 0, 0, 0.5)",borderRadius:99,cursor:"pointer",userSelect:"none"};return u?(te.height=8,te.left=0,te.right=0,te.bottom=0,ce.height="100%",ce.width=p,R?ce.left=D:ce.right=D):(te.width=8,te.top=0,te.bottom=0,R?te.right=0:te.left=0,ce.width="100%",ce.height=p,ce.top=D),d.createElement("div",{ref:A,className:ie(Q,K(K(K({},"".concat(Q,"-horizontal"),u),"".concat(Q,"-vertical"),!u),"".concat(Q,"-visible"),_)),style:Z(Z({},te),h),onMouseDown:W,onMouseMove:L},d.createElement("div",{ref:V,className:ie("".concat(Q,"-thumb"),K({},"".concat(Q,"-thumb-moving"),w)),style:Z(Z({},ce),m),onMouseDown:q}))}),u_=20;function rk(){var e=arguments.length>0&&arguments[0]!==void 0?arguments[0]:0,t=arguments.length>1&&arguments[1]!==void 0?arguments[1]:0,n=e/t*e;return isNaN(n)&&(n=0),n=Math.max(n,u_),Math.floor(n)}var d_=["prefixCls","className","height","itemHeight","fullHeight","style","data","children","itemKey","virtual","direction","scrollWidth","component","onScroll","onVirtualScroll","onVisibleChange","innerProps","extraRender","styles"],f_=[],p_={overflowY:"auto",overflowAnchor:"none"};function v_(e,t){var n=e.prefixCls,r=n===void 0?"rc-virtual-list":n,o=e.className,i=e.height,a=e.itemHeight,s=e.fullHeight,c=s===void 
0?!0:s,u=e.style,p=e.data,v=e.children,h=e.itemKey,m=e.virtual,b=e.direction,y=e.scrollWidth,w=e.component,C=w===void 0?"div":w,S=e.onScroll,E=e.onVirtualScroll,k=e.onVisibleChange,O=e.innerProps,$=e.extraRender,T=e.styles,M=Mt(e,d_),P=d.useCallback(function(ut){return typeof h=="function"?h(ut):ut==null?void 0:ut[h]},[h]),R=a_(P),A=ve(R,4),V=A[0],z=A[1],B=A[2],_=A[3],H=!!(m!==!1&&i&&a),j=d.useMemo(function(){return Object.values(B.maps).reduce(function(ut,lt){return ut+lt},0)},[B.id,B.maps]),L=H&&p&&(Math.max(a*p.length,j)>i||!!y),F=b==="rtl",U=ie(r,K({},"".concat(r,"-rtl"),F),o),D=p||f_,W=d.useRef(),G=d.useRef(),q=d.useRef(),J=d.useState(0),Y=ve(J,2),Q=Y[0],te=Y[1],ce=d.useState(0),se=ve(ce,2),ne=se[0],ae=se[1],ee=d.useState(!1),re=ve(ee,2),le=re[0],pe=re[1],Oe=function(){pe(!0)},ge=function(){pe(!1)},Re={getKey:P};function ye(ut){te(function(lt){var gt;typeof ut=="function"?gt=ut(lt):gt=ut;var Qt=Ct(gt);return W.current.scrollTop=Qt,Qt})}var Te=d.useRef({start:0,end:D.length}),Ae=d.useRef(),me=n_(D,P),Ie=ve(me,1),Le=Ie[0];Ae.current=Le;var Be=d.useMemo(function(){if(!H)return{scrollHeight:void 0,start:0,end:D.length-1,offset:void 0};if(!L){var ut;return{scrollHeight:((ut=G.current)===null||ut===void 0?void 0:ut.offsetHeight)||0,start:0,end:D.length-1,offset:void 0}}for(var lt=0,gt,Qt,dn,tn=D.length,Sn=0;Sn=Q&>===void 0&&(gt=Sn,Qt=lt),mt>Q+i&&dn===void 0&&(dn=Sn),lt=mt}return gt===void 0&&(gt=0,Qt=0,dn=Math.ceil(i/a)),dn===void 0&&(dn=D.length-1),dn=Math.min(dn+1,D.length-1),{scrollHeight:lt,start:gt,end:dn,offset:Qt}},[L,H,Q,D,_,i]),et=Be.scrollHeight,rt=Be.start,Ze=Be.end,Ve=Be.offset;Te.current.start=rt,Te.current.end=Ze;var Ye=d.useState({width:0,height:i}),Ge=ve(Ye,2),Fe=Ge[0],we=Ge[1],ze=function(lt){we({width:lt.offsetWidth,height:lt.offsetHeight})},Me=d.useRef(),Pe=d.useRef(),Ke=d.useMemo(function(){return rk(Fe.width,y)},[Fe.width,y]),St=d.useMemo(function(){return rk(Fe.height,et)},[Fe.height,et]),Ft=et-i,Lt=d.useRef(Ft);Lt.current=Ft;function 
Ct(ut){var lt=ut;return Number.isNaN(Lt.current)||(lt=Math.min(lt,Lt.current)),lt=Math.max(lt,0),lt}var Xt=Q<=0,Pt=Q>=Ft,Gt=ne<=0,ft=ne>=y,Je=iP(Xt,Pt,Gt,ft),He=function(){return{x:F?-ne:ne,y:Q}},We=d.useRef(He()),Et=gn(function(ut){if(E){var lt=Z(Z({},He()),ut);(We.current.x!==lt.x||We.current.y!==lt.y)&&(E(lt),We.current=lt)}});function wt(ut,lt){var gt=ut;lt?(pi.flushSync(function(){ae(gt)}),Et()):ye(gt)}function _e(ut){var lt=ut.currentTarget.scrollTop;lt!==Q&&ye(lt),S==null||S(ut),Et()}var qe=function(lt){var gt=lt,Qt=y?y-Fe.width:0;return gt=Math.max(gt,0),gt=Math.min(gt,Qt),gt},ot=gn(function(ut,lt){lt?(pi.flushSync(function(){ae(function(gt){var Qt=gt+(F?-ut:ut);return qe(Qt)})}),Et()):ye(function(gt){var Qt=gt+ut;return Qt})}),at=r_(H,Xt,Pt,Gt,ft,!!y,ot),xt=ve(at,2),_t=xt[0],pt=xt[1];s_(H,W,function(ut,lt,gt,Qt){var dn=Qt;return Je(ut,lt,gt)?!1:!dn||!dn._virtualHandled?(dn&&(dn._virtualHandled=!0),_t({preventDefault:function(){},deltaX:ut?lt:0,deltaY:ut?0:lt}),!0):!1}),sn(function(){function ut(gt){var Qt=Xt&>.detail<0,dn=Pt&>.detail>0;H&&!Qt&&!dn&>.preventDefault()}var lt=W.current;return lt.addEventListener("wheel",_t,{passive:!1}),lt.addEventListener("DOMMouseScroll",pt,{passive:!0}),lt.addEventListener("MozMousePixelScroll",ut,{passive:!1}),function(){lt.removeEventListener("wheel",_t),lt.removeEventListener("DOMMouseScroll",pt),lt.removeEventListener("MozMousePixelScroll",ut)}},[H,Xt,Pt]),sn(function(){if(y){var ut=qe(ne);ae(ut),Et({x:ut})}},[Fe.width,y]);var dt=function(){var lt,gt;(lt=Me.current)===null||lt===void 0||lt.delayHidden(),(gt=Pe.current)===null||gt===void 0||gt.delayHidden()},$t=c_(W,D,B,a,P,function(){return z(!0)},ye,dt);d.useImperativeHandle(t,function(){return{nativeElement:q.current,getScrollInfo:He,scrollTo:function(lt){function gt(Qt){return Qt&&st(Qt)==="object"&&("left"in Qt||"top"in Qt)}gt(lt)?(lt.left!==void 0&&ae(qe(lt.left)),$t(lt.top)):$t(lt)}}}),sn(function(){if(k){var ut=D.slice(rt,Ze+1);k(ut,D)}},[rt,Ze,D]);var 
kt=o_(D,P,B,a),Kt=$==null?void 0:$({start:rt,end:Ze,virtual:L,offsetX:ne,offsetY:Ve,rtl:F,getSize:kt}),ln=e_(D,rt,Ze,y,ne,V,v,Re),Yt=null;i&&(Yt=Z(K({},c?"height":"maxHeight",i),p_),H&&(Yt.overflowY="hidden",y&&(Yt.overflowX="hidden"),le&&(Yt.pointerEvents="none")));var un={};return F&&(un.dir="rtl"),d.createElement("div",$e({ref:q,style:Z(Z({},u),{},{position:"relative"}),className:U},un,M),d.createElement(qo,{onResize:ze},d.createElement(C,{className:"".concat(r,"-holder"),style:Yt,ref:W,onScroll:_e,onMouseEnter:dt},d.createElement(oP,{prefixCls:r,height:et,offsetX:ne,offsetY:Ve,scrollWidth:y,onInnerResize:z,ref:G,innerProps:O,rtl:F,extra:Kt},ln))),L&&et>i&&d.createElement(nk,{ref:Me,prefixCls:r,scrollOffset:Q,scrollRange:et,rtl:F,onScroll:wt,onStartMove:Oe,onStopMove:ge,spinSize:St,containerSize:Fe.height,style:T==null?void 0:T.verticalScrollBar,thumbStyle:T==null?void 0:T.verticalScrollBarThumb}),L&&y>Fe.width&&d.createElement(nk,{ref:Pe,prefixCls:r,scrollOffset:ne,scrollRange:y,rtl:F,onScroll:wt,onStartMove:Oe,onStopMove:ge,spinSize:Ke,containerSize:Fe.width,horizontal:!0,style:T==null?void 0:T.horizontalScrollBar,thumbStyle:T==null?void 0:T.horizontalScrollBarThumb}))}var qv=d.forwardRef(v_);qv.displayName="List";function h_(){return/(mac\sos|macintosh)/i.test(navigator.appVersion)}var g_=["disabled","title","children","style","className"];function ok(e){return typeof e=="string"||typeof e=="number"}var m_=function(t,n){var r=XT(),o=r.prefixCls,i=r.id,a=r.open,s=r.multiple,c=r.mode,u=r.searchValue,p=r.toggleOpen,v=r.notFoundContent,h=r.onPopupScroll,m=d.useContext(yw),b=m.maxCount,y=m.flattenOptions,w=m.onActiveValue,C=m.defaultActiveFirstOption,S=m.onSelect,E=m.menuItemSelectedIcon,k=m.rawValues,O=m.fieldNames,$=m.virtual,T=m.direction,M=m.listHeight,P=m.listItemHeight,R=m.optionRender,A="".concat(o,"-item"),V=Ls(function(){return y},[a,y],function(se,ne){return ne[0]&&se[1]!==ne[1]}),z=d.useRef(null),B=d.useMemo(function(){return s&&kb(b)&&(k==null?void 
0:k.size)>=b},[s,b,k==null?void 0:k.size]),_=function(ne){ne.preventDefault()},H=function(ne){var ae;(ae=z.current)===null||ae===void 0||ae.scrollTo(typeof ne=="number"?{index:ne}:ne)},j=function(ne){for(var ae=arguments.length>1&&arguments[1]!==void 0?arguments[1]:1,ee=V.length,re=0;re1&&arguments[1]!==void 0?arguments[1]:!1;D(ne);var ee={source:ae?"keyboard":"mouse"},re=V[ne];if(!re){w(null,-1,ee);return}w(re.value,ne,ee)};d.useEffect(function(){W(C!==!1?j(0):-1)},[V.length,u]);var G=d.useCallback(function(se){return k.has(se)&&c!=="combobox"},[c,Se(k).toString(),k.size]);d.useEffect(function(){var se=setTimeout(function(){if(!s&&a&&k.size===1){var ae=Array.from(k)[0],ee=V.findIndex(function(re){var le=re.data;return le.value===ae});ee!==-1&&(W(ee),H(ee))}});if(a){var ne;(ne=z.current)===null||ne===void 0||ne.scrollTo(void 0)}return function(){return clearTimeout(se)}},[a,u]);var q=function(ne){ne!==void 0&&S(ne,{selected:!k.has(ne)}),s||p(!1)};if(d.useImperativeHandle(n,function(){return{onKeyDown:function(ne){var ae=ne.which,ee=ne.ctrlKey;switch(ae){case De.N:case De.P:case De.UP:case De.DOWN:var re=0;if(ae===De.UP?re=-1:ae===De.DOWN?re=1:h_()&&ee&&(ae===De.N?re=1:ae===De.P&&(re=-1)),re!==0){var le=j(U+re,re);H(le),W(le,!0)}break;case De.ENTER:var pe,Oe=V[U];Oe&&!(Oe!=null&&(pe=Oe.data)!==null&&pe!==void 0&&pe.disabled)&&!B?q(Oe.value):q(void 0),a&&ne.preventDefault();break;case De.ESC:p(!1),a&&ne.stopPropagation()}},onKeyUp:function(){},scrollTo:function(ne){H(ne)}}}),V.length===0)return d.createElement("div",{role:"listbox",id:"".concat(i,"_list"),className:"".concat(A,"-empty"),onMouseDown:_},v);var J=Object.keys(O).map(function(se){return O[se]}),Y=function(ne){return ne.label};function Q(se,ne){var ae=se.group;return{role:ae?"presentation":"option",id:"".concat(i,"_list_").concat(ne)}}var te=function(ne){var ae=V[ne];if(!ae)return null;var ee=ae.data||{},re=ee.value,le=ae.group,pe=Gr(ee,!0),Oe=Y(ae);return ae?d.createElement("div",$e({"aria-label":typeof 
Oe=="string"&&!le?Oe:null},pe,{key:ne},Q(ae,ne),{"aria-selected":G(re)}),re):null},ce={role:"listbox",id:"".concat(i,"_list")};return d.createElement(d.Fragment,null,$&&d.createElement("div",$e({},ce,{style:{height:0,width:0,overflow:"hidden"}}),te(U-1),te(U),te(U+1)),d.createElement(qv,{itemKey:"key",ref:z,data:V,height:M,itemHeight:P,fullHeight:!1,onMouseDown:_,onScroll:h,virtual:$,direction:T,innerProps:$?null:ce},function(se,ne){var ae=se.group,ee=se.groupOption,re=se.data,le=se.label,pe=se.value,Oe=re.key;if(ae){var ge,Re=(ge=re.title)!==null&&ge!==void 0?ge:ok(le)?le.toString():void 0;return d.createElement("div",{className:ie(A,"".concat(A,"-group"),re.className),title:Re},le!==void 0?le:Oe)}var ye=re.disabled,Te=re.title;re.children;var Ae=re.style,me=re.className,Ie=Mt(re,g_),Le=Ln(Ie,J),Be=G(pe),et=ye||!Be&&B,rt="".concat(A,"-option"),Ze=ie(A,rt,me,K(K(K(K({},"".concat(rt,"-grouped"),ee),"".concat(rt,"-active"),U===ne&&!et),"".concat(rt,"-disabled"),et),"".concat(rt,"-selected"),Be)),Ve=Y(se),Ye=!E||typeof E=="function"||Be,Ge=typeof Ve=="number"?Ve:Ve||pe,Fe=ok(Ge)?Ge.toString():void 0;return Te!==void 0&&(Fe=Te),d.createElement("div",$e({},Gr(Le),$?{}:Q(se,ne),{"aria-selected":Be,className:Ze,title:Fe,onMouseMove:function(){U===ne||et||W(ne)},onClick:function(){et||q(pe)},style:Ae}),d.createElement("div",{className:"".concat(rt,"-content")},typeof R=="function"?R(se,{index:ne}):Ge),d.isValidElement(E)||Be,Ye&&d.createElement(Uv,{className:"".concat(A,"-option-state"),customizeIcon:E,customizeIconProps:{value:pe,disabled:et,isSelected:Be}},Be?"✓":null))}))},b_=d.forwardRef(m_);const y_=function(e,t){var n=d.useRef({values:new Map,options:new Map}),r=d.useMemo(function(){var i=n.current,a=i.values,s=i.options,c=e.map(function(v){if(v.label===void 0){var h;return Z(Z({},v),{},{label:(h=a.get(v.value))===null||h===void 0?void 0:h.label})}return v}),u=new Map,p=new Map;return 
c.forEach(function(v){u.set(v.value,v),p.set(v.value,t.get(v.value)||s.get(v.value))}),n.current.values=u,n.current.options=p,c},[e,t]),o=d.useCallback(function(i){return t.get(i)||n.current.options.get(i)},[t]);return[r,o]};function Pm(e,t){return eP(e).join("").toUpperCase().includes(t)}const w_=function(e,t,n,r,o){return d.useMemo(function(){if(!n||r===!1)return e;var i=t.options,a=t.label,s=t.value,c=[],u=typeof r=="function",p=n.toUpperCase(),v=u?r:function(m,b){return o?Pm(b[o],p):b[i]?Pm(b[a!=="children"?a:"label"],p):Pm(b[s],p)},h=u?function(m){return Ob(m)}:function(m){return m};return e.forEach(function(m){if(m[i]){var b=v(n,h(m));if(b)c.push(m);else{var y=m[i].filter(function(w){return v(n,h(w))});y.length&&c.push(Z(Z({},m),{},K({},i,y)))}return}v(n,h(m))&&c.push(m)}),c},[e,r,o,n,t])};var ik=0,x_=$r();function S_(){var e;return x_?(e=ik,ik+=1):e="TEST_OR_SSR",e}function aP(e){var t=d.useState(),n=ve(t,2),r=n[0],o=n[1];return d.useEffect(function(){o("rc_select_".concat(S_()))},[]),e||r}var C_=["children","value"],E_=["children"];function k_(e){var t=e,n=t.key,r=t.props,o=r.children,i=r.value,a=Mt(r,C_);return Z({key:n,value:i!==void 0?i:n,children:o},a)}function sP(e){var t=arguments.length>1&&arguments[1]!==void 0?arguments[1]:!1;return lo(e).map(function(n,r){if(!d.isValidElement(n)||!n.type)return null;var o=n,i=o.type.isSelectOptGroup,a=o.key,s=o.props,c=s.children,u=Mt(s,E_);return t||!i?k_(n):Z(Z({key:"__RC_SELECT_GRP__".concat(a===null?r:a,"__"),label:a},u),{},{options:sP(c)})}).filter(function(n){return n})}var O_=function(t,n,r,o,i){return d.useMemo(function(){var a=t,s=!t;s&&(a=sP(n));var c=new Map,u=new Map,p=function(m,b,y){y&&typeof y=="string"&&m.set(b[y],b)},v=function h(m){for(var b=arguments.length>1&&arguments[1]!==void 0?arguments[1]:!1,y=0;y0?_e(at.options):at.options}):at})},Fe=d.useMemo(function(){return S?Ge(Ye):Ye},[Ye,S,se]),we=d.useMemo(function(){return XF(Fe,{fieldNames:Q,childrenAsData:J})},[Fe,Q,J]),ze=function(qe){var 
ot=pe(qe);if(ye(ot),U&&(ot.length!==Ie.length||ot.some(function(_t,pt){var dt;return((dt=Ie[pt])===null||dt===void 0?void 0:dt.value)!==(_t==null?void 0:_t.value)}))){var at=F?ot:ot.map(function(_t){return _t.value}),xt=ot.map(function(_t){return Ob(Le(_t.value))});U(q?at:at[0],q?xt:xt[0])}},Me=d.useState(null),Pe=ve(Me,2),Ke=Pe[0],St=Pe[1],Ft=d.useState(0),Lt=ve(Ft,2),Ct=Lt[0],Xt=Lt[1],Pt=M!==void 0?M:r!=="combobox",Gt=d.useCallback(function(_e,qe){var ot=arguments.length>2&&arguments[2]!==void 0?arguments[2]:{},at=ot.source,xt=at===void 0?"keyboard":at;Xt(qe),a&&r==="combobox"&&_e!==null&&xt==="keyboard"&&St(String(_e))},[a,r]),ft=function(qe,ot,at){var xt=function(){var ut,lt=Le(qe);return[F?{label:lt==null?void 0:lt[Q.label],value:qe,key:(ut=lt==null?void 0:lt.key)!==null&&ut!==void 0?ut:qe}:qe,Ob(lt)]};if(ot&&m){var _t=xt(),pt=ve(_t,2),dt=pt[0],$t=pt[1];m(dt,$t)}else if(!ot&&b&&at!=="clear"){var kt=xt(),Kt=ve(kt,2),ln=Kt[0],Yt=Kt[1];b(ln,Yt)}},Je=ak(function(_e,qe){var ot,at=q?qe.selected:!0;at?ot=q?[].concat(Se(Ie),[_e]):[_e]:ot=Ie.filter(function(xt){return xt.value!==_e}),ze(ot),ft(_e,at),r==="combobox"?St(""):(!$b||h)&&(ne(""),St(""))}),He=function(qe,ot){ze(qe);var at=ot.type,xt=ot.values;(at==="remove"||at==="clear")&&xt.forEach(function(_t){ft(_t.value,!1,at)})},We=function(qe,ot){if(ne(qe),St(null),ot.source==="submit"){var at=(qe||"").trim();if(at){var xt=Array.from(new Set([].concat(Se(et),[at])));ze(xt),ft(at,!0),ne("")}return}ot.source!=="blur"&&(r==="combobox"&&ze(qe),p==null||p(qe))},Et=function(qe){var ot=qe;r!=="tags"&&(ot=qe.map(function(xt){var _t=re.get(xt);return _t==null?void 0:_t.value}).filter(function(xt){return xt!==void 0}));var at=Array.from(new Set([].concat(Se(et),Se(ot))));ze(at),at.forEach(function(xt){ft(xt,!0)})},wt=d.useMemo(function(){var _e=R!==!1&&w!==!1;return 
Z(Z({},ae),{},{flattenOptions:we,onActiveValue:Gt,defaultActiveFirstOption:Pt,onSelect:Je,menuItemSelectedIcon:P,rawValues:et,fieldNames:Q,virtual:_e,direction:A,listHeight:z,listItemHeight:_,childrenAsData:J,maxCount:D,optionRender:$})},[D,ae,we,Gt,Pt,Je,P,et,Q,R,w,A,z,_,J,$]);return d.createElement(yw.Provider,{value:wt},d.createElement(rP,$e({},W,{id:G,prefixCls:i,ref:t,omitDomProps:I_,mode:r,displayValues:Be,onDisplayValuesChange:He,direction:A,searchValue:se,onSearch:We,autoClearSearchValue:h,onSearchSplit:Et,dropdownMatchSelectWidth:w,OptionList:b_,emptyOptions:!we.length,activeValue:Ke,activeDescendantId:"".concat(G,"_list_").concat(Ct)})))}),Sw=P_;Sw.Option=xw;Sw.OptGroup=ww;function od(e,t,n){return ie({[`${e}-status-success`]:t==="success",[`${e}-status-warning`]:t==="warning",[`${e}-status-error`]:t==="error",[`${e}-status-validating`]:t==="validating",[`${e}-has-feedback`]:n})}const $d=(e,t)=>t||e,M_=()=>{const[,e]=Ir(),[t]=bi("Empty"),r=new xn(e.colorBgBase).toHsl().l<.5?{opacity:.65}:{};return d.createElement("svg",{style:r,width:"184",height:"152",viewBox:"0 0 184 152",xmlns:"http://www.w3.org/2000/svg"},d.createElement("title",null,(t==null?void 0:t.description)||"Empty"),d.createElement("g",{fill:"none",fillRule:"evenodd"},d.createElement("g",{transform:"translate(24 31.67)"},d.createElement("ellipse",{fillOpacity:".8",fill:"#F5F5F7",cx:"67.797",cy:"106.89",rx:"67.797",ry:"12.668"}),d.createElement("path",{d:"M122.034 69.674L98.109 40.229c-1.148-1.386-2.826-2.225-4.593-2.225h-51.44c-1.766 0-3.444.839-4.592 2.225L13.56 69.674v15.383h108.475V69.674z",fill:"#AEB8C2"}),d.createElement("path",{d:"M101.537 86.214L80.63 61.102c-1.001-1.207-2.507-1.867-4.048-1.867H31.724c-1.54 0-3.047.66-4.048 1.867L6.769 86.214v13.792h94.768V86.214z",fill:"url(#linearGradient-1)",transform:"translate(13.56)"}),d.createElement("path",{d:"M33.83 0h67.933a4 4 0 0 1 4 4v93.344a4 4 0 0 1-4 4H33.83a4 4 0 0 1-4-4V4a4 4 0 0 1 
4-4z",fill:"#F5F5F7"}),d.createElement("path",{d:"M42.678 9.953h50.237a2 2 0 0 1 2 2V36.91a2 2 0 0 1-2 2H42.678a2 2 0 0 1-2-2V11.953a2 2 0 0 1 2-2zM42.94 49.767h49.713a2.262 2.262 0 1 1 0 4.524H42.94a2.262 2.262 0 0 1 0-4.524zM42.94 61.53h49.713a2.262 2.262 0 1 1 0 4.525H42.94a2.262 2.262 0 0 1 0-4.525zM121.813 105.032c-.775 3.071-3.497 5.36-6.735 5.36H20.515c-3.238 0-5.96-2.29-6.734-5.36a7.309 7.309 0 0 1-.222-1.79V69.675h26.318c2.907 0 5.25 2.448 5.25 5.42v.04c0 2.971 2.37 5.37 5.277 5.37h34.785c2.907 0 5.277-2.421 5.277-5.393V75.1c0-2.972 2.343-5.426 5.25-5.426h26.318v33.569c0 .617-.077 1.216-.221 1.789z",fill:"#DCE0E6"})),d.createElement("path",{d:"M149.121 33.292l-6.83 2.65a1 1 0 0 1-1.317-1.23l1.937-6.207c-2.589-2.944-4.109-6.534-4.109-10.408C138.802 8.102 148.92 0 161.402 0 173.881 0 184 8.102 184 18.097c0 9.995-10.118 18.097-22.599 18.097-4.528 0-8.744-1.066-12.28-2.902z",fill:"#DCE0E6"}),d.createElement("g",{transform:"translate(149.65 15.383)",fill:"#FFF"},d.createElement("ellipse",{cx:"20.654",cy:"3.167",rx:"2.849",ry:"2.815"}),d.createElement("path",{d:"M5.698 5.63H0L2.898.704zM9.259.704h4.985V5.63H9.259z"}))))},N_=()=>{const[,e]=Ir(),[t]=bi("Empty"),{colorFill:n,colorFillTertiary:r,colorFillQuaternary:o,colorBgContainer:i}=e,{borderColor:a,shadowColor:s,contentColor:c}=d.useMemo(()=>({borderColor:new xn(n).onBackground(i).toHexShortString(),shadowColor:new xn(r).onBackground(i).toHexShortString(),contentColor:new xn(o).onBackground(i).toHexShortString()}),[n,r,o,i]);return d.createElement("svg",{width:"64",height:"41",viewBox:"0 0 64 41",xmlns:"http://www.w3.org/2000/svg"},d.createElement("title",null,(t==null?void 0:t.description)||"Empty"),d.createElement("g",{transform:"translate(0 1)",fill:"none",fillRule:"evenodd"},d.createElement("ellipse",{fill:s,cx:"32",cy:"33",rx:"32",ry:"7"}),d.createElement("g",{fillRule:"nonzero",stroke:a},d.createElement("path",{d:"M55 12.76L44.854 1.258C44.367.474 43.656 0 42.907 0H21.093c-.749 0-1.46.474-1.947 1.257L9 
12.761V22h46v-9.24z"}),d.createElement("path",{d:"M41.613 15.931c0-1.605.994-2.93 2.227-2.931H55v18.137C55 33.26 53.68 35 52.05 35h-40.1C10.32 35 9 33.259 9 31.137V13h11.16c1.233 0 2.227 1.323 2.227 2.928v.022c0 1.605 1.005 2.901 2.237 2.901h14.752c1.232 0 2.237-1.308 2.237-2.913v-.007z",fill:c}))))},R_=e=>{const{componentCls:t,margin:n,marginXS:r,marginXL:o,fontSize:i,lineHeight:a}=e;return{[t]:{marginInline:r,fontSize:i,lineHeight:a,textAlign:"center",[`${t}-image`]:{height:e.emptyImgHeight,marginBottom:r,opacity:e.opacityImage,img:{height:"100%"},svg:{maxWidth:"100%",height:"100%",margin:"auto"}},[`${t}-description`]:{color:e.colorTextDescription},[`${t}-footer`]:{marginTop:n},"&-normal":{marginBlock:o,color:e.colorTextDescription,[`${t}-description`]:{color:e.colorTextDescription},[`${t}-image`]:{height:e.emptyImgHeightMD}},"&-small":{marginBlock:r,color:e.colorTextDescription,[`${t}-image`]:{height:e.emptyImgHeightSM}}}}},D_=In("Empty",e=>{const{componentCls:t,controlHeightLG:n,calc:r}=e,o=vn(e,{emptyImgCls:`${t}-img`,emptyImgHeight:r(n).mul(2.5).equal(),emptyImgHeightMD:n,emptyImgHeightSM:r(n).mul(.875).equal()});return[R_(o)]});var j_=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{var{className:t,rootClassName:n,prefixCls:r,image:o=lP,description:i,children:a,imageStyle:s,style:c}=e,u=j_(e,["className","rootClassName","prefixCls","image","description","children","imageStyle","style"]);const{getPrefixCls:p,direction:v,empty:h}=d.useContext(ht),m=p("empty",r),[b,y,w]=D_(m),[C]=bi("Empty"),S=typeof i<"u"?i:C==null?void 0:C.description,E=typeof S=="string"?S:"empty";let k=null;return typeof o=="string"?k=d.createElement("img",{alt:E,src:o}):k=o,b(d.createElement("div",Object.assign({className:ie(y,w,m,h==null?void 
0:h.className,{[`${m}-normal`]:o===cP,[`${m}-rtl`]:v==="rtl"},t,n),style:Object.assign(Object.assign({},h==null?void 0:h.style),c)},u),d.createElement("div",{className:`${m}-image`,style:s},k),S&&d.createElement("div",{className:`${m}-description`},S),a&&d.createElement("div",{className:`${m}-footer`},a)))};Ji.PRESENTED_IMAGE_DEFAULT=lP;Ji.PRESENTED_IMAGE_SIMPLE=cP;const Xv=e=>{const{componentName:t}=e,{getPrefixCls:n}=d.useContext(ht),r=n("empty");switch(t){case"Table":case"List":return ue.createElement(Ji,{image:Ji.PRESENTED_IMAGE_SIMPLE});case"Select":case"TreeSelect":case"Cascader":case"Transfer":case"Mentions":return ue.createElement(Ji,{image:Ji.PRESENTED_IMAGE_SIMPLE,className:`${r}-small`});case"Table.filter":return null;default:return ue.createElement(Ji,null)}},Gv=function(e,t){let n=arguments.length>2&&arguments[2]!==void 0?arguments[2]:void 0;var r,o;const{variant:i,[e]:a}=d.useContext(ht),s=d.useContext($T),c=a==null?void 0:a.variant;let u;typeof t<"u"?u=t:n===!1?u="borderless":u=(o=(r=s??c)!==null&&r!==void 0?r:i)!==null&&o!==void 0?o:"outlined";const p=zB.includes(u);return[u,p]},L_=e=>{const n={overflow:{adjustX:!0,adjustY:!0,shiftY:!0},htmlRegion:e==="scroll"?"scroll":"visible",dynamicInset:!0};return{bottomLeft:Object.assign(Object.assign({},n),{points:["tl","bl"],offset:[0,4]}),bottomRight:Object.assign(Object.assign({},n),{points:["tr","br"],offset:[0,4]}),topLeft:Object.assign(Object.assign({},n),{points:["bl","tl"],offset:[0,-4]}),topRight:Object.assign(Object.assign({},n),{points:["br","tr"],offset:[0,-4]})}};function uP(e,t){return e||L_(t)}const 
sk=e=>{const{optionHeight:t,optionFontSize:n,optionLineHeight:r,optionPadding:o}=e;return{position:"relative",display:"block",minHeight:t,padding:o,color:e.colorText,fontWeight:"normal",fontSize:n,lineHeight:r,boxSizing:"border-box"}},B_=e=>{const{antCls:t,componentCls:n}=e,r=`${n}-item`,o=`&${t}-slide-up-enter${t}-slide-up-enter-active`,i=`&${t}-slide-up-appear${t}-slide-up-appear-active`,a=`&${t}-slide-up-leave${t}-slide-up-leave-active`,s=`${n}-dropdown-placement-`;return[{[`${n}-dropdown`]:Object.assign(Object.assign({},jn(e)),{position:"absolute",top:-9999,zIndex:e.zIndexPopup,boxSizing:"border-box",padding:e.paddingXXS,overflow:"hidden",fontSize:e.fontSize,fontVariant:"initial",backgroundColor:e.colorBgElevated,borderRadius:e.borderRadiusLG,outline:"none",boxShadow:e.boxShadowSecondary,[` - ${o}${s}bottomLeft, - ${i}${s}bottomLeft - `]:{animationName:tw},[` - ${o}${s}topLeft, - ${i}${s}topLeft, - ${o}${s}topRight, - ${i}${s}topRight - `]:{animationName:rw},[`${a}${s}bottomLeft`]:{animationName:nw},[` - ${a}${s}topLeft, - ${a}${s}topRight - `]:{animationName:ow},"&-hidden":{display:"none"},[r]:Object.assign(Object.assign({},sk(e)),{cursor:"pointer",transition:`background ${e.motionDurationSlow} ease`,borderRadius:e.borderRadiusSM,"&-group":{color:e.colorTextDescription,fontSize:e.fontSizeSM,cursor:"default"},"&-option":{display:"flex","&-content":Object.assign({flex:"auto"},Ka),"&-state":{flex:"none",display:"flex",alignItems:"center"},[`&-active:not(${r}-option-disabled)`]:{backgroundColor:e.optionActiveBg},[`&-selected:not(${r}-option-disabled)`]:{color:e.optionSelectedColor,fontWeight:e.optionSelectedFontWeight,backgroundColor:e.optionSelectedBg,[`${r}-option-state`]:{color:e.colorPrimary},[`&:has(+ ${r}-option-selected:not(${r}-option-disabled))`]:{borderEndStartRadius:0,borderEndEndRadius:0,[`& + 
${r}-option-selected:not(${r}-option-disabled)`]:{borderStartStartRadius:0,borderStartEndRadius:0}}},"&-disabled":{[`&${r}-option-selected`]:{backgroundColor:e.colorBgContainerDisabled},color:e.colorTextDisabled,cursor:"not-allowed"},"&-grouped":{paddingInlineStart:e.calc(e.controlPaddingHorizontal).mul(2).equal()}},"&-empty":Object.assign(Object.assign({},sk(e)),{color:e.colorTextDisabled})}),"&-rtl":{direction:"rtl"}})},Gl(e,"slide-up"),Gl(e,"slide-down"),Qp(e,"move-up"),Qp(e,"move-down")]},A_=e=>{const{multipleSelectItemHeight:t,paddingXXS:n,lineWidth:r,INTERNAL_FIXED_ITEM_MARGIN:o}=e,i=e.max(e.calc(n).sub(r).equal(),0),a=e.max(e.calc(i).sub(o).equal(),0);return{basePadding:i,containerPadding:a,itemHeight:de(t),itemLineHeight:de(e.calc(t).sub(e.calc(e.lineWidth).mul(2)).equal())}},z_=e=>{const{multipleSelectItemHeight:t,selectHeight:n,lineWidth:r}=e;return e.calc(n).sub(t).div(2).sub(r).equal()},H_=e=>{const{componentCls:t,iconCls:n,borderRadiusSM:r,motionDurationSlow:o,paddingXS:i,multipleItemColorDisabled:a,multipleItemBorderColorDisabled:s,colorIcon:c,colorIconHover:u,INTERNAL_FIXED_ITEM_MARGIN:p}=e;return{[`${t}-selection-overflow`]:{position:"relative",display:"flex",flex:"auto",flexWrap:"wrap",maxWidth:"100%","&-item":{flex:"none",alignSelf:"center",maxWidth:"100%",display:"inline-flex"},[`${t}-selection-item`]:{display:"flex",alignSelf:"center",flex:"none",boxSizing:"border-box",maxWidth:"100%",marginBlock:p,borderRadius:r,cursor:"default",transition:`font-size ${o}, line-height ${o}, height 
${o}`,marginInlineEnd:e.calc(p).mul(2).equal(),paddingInlineStart:i,paddingInlineEnd:e.calc(i).div(2).equal(),[`${t}-disabled&`]:{color:a,borderColor:s,cursor:"not-allowed"},"&-content":{display:"inline-block",marginInlineEnd:e.calc(i).div(2).equal(),overflow:"hidden",whiteSpace:"pre",textOverflow:"ellipsis"},"&-remove":Object.assign(Object.assign({},Mv()),{display:"inline-flex",alignItems:"center",color:c,fontWeight:"bold",fontSize:10,lineHeight:"inherit",cursor:"pointer",[`> ${n}`]:{verticalAlign:"-0.2em"},"&:hover":{color:u}})}}}},F_=(e,t)=>{const{componentCls:n,INTERNAL_FIXED_ITEM_MARGIN:r}=e,o=`${n}-selection-overflow`,i=e.multipleSelectItemHeight,a=z_(e),s=t?`${n}-${t}`:"",c=A_(e);return{[`${n}-multiple${s}`]:Object.assign(Object.assign({},H_(e)),{[`${n}-selector`]:{display:"flex",flexWrap:"wrap",alignItems:"center",height:"100%",paddingInline:c.basePadding,paddingBlock:c.containerPadding,borderRadius:e.borderRadius,[`${n}-disabled&`]:{background:e.multipleSelectorBgDisabled,cursor:"not-allowed"},"&:after":{display:"inline-block",width:0,margin:`${de(r)} 0`,lineHeight:de(i),visibility:"hidden",content:'"\\a0"'}},[`${n}-selection-item`]:{height:c.itemHeight,lineHeight:de(c.itemLineHeight)},[`${o}-item + ${o}-item`]:{[`${n}-selection-search`]:{marginInlineStart:0}},[`${o}-item-suffix`]:{height:"100%"},[`${n}-selection-search`]:{display:"inline-flex",position:"relative",maxWidth:"100%",marginInlineStart:e.calc(e.inputPaddingHorizontalBase).sub(a).equal(),"\n &-input,\n &-mirror\n ":{height:i,fontFamily:e.fontFamily,lineHeight:de(i),transition:`all ${e.motionDurationSlow}`},"&-input":{width:"100%",minWidth:4.1},"&-mirror":{position:"absolute",top:0,insetInlineStart:0,insetInlineEnd:"auto",zIndex:999,whiteSpace:"pre",visibility:"hidden"}},[`${n}-selection-placeholder`]:{position:"absolute",top:"50%",insetInlineStart:e.inputPaddingHorizontalBase,insetInlineEnd:e.inputPaddingHorizontalBase,transform:"translateY(-50%)",transition:`all 
${e.motionDurationSlow}`}})}};function Mm(e,t){const{componentCls:n}=e,r=t?`${n}-${t}`:"",o={[`${n}-multiple${r}`]:{fontSize:e.fontSize,[`${n}-selector`]:{[`${n}-show-search&`]:{cursor:"text"}},[` - &${n}-show-arrow ${n}-selector, - &${n}-allow-clear ${n}-selector - `]:{paddingInlineEnd:e.calc(e.fontSizeIcon).add(e.controlPaddingHorizontal).equal()}}};return[F_(e,t),o]}const __=e=>{const{componentCls:t}=e,n=vn(e,{selectHeight:e.controlHeightSM,multipleSelectItemHeight:e.multipleItemHeightSM,borderRadius:e.borderRadiusSM,borderRadiusSM:e.borderRadiusXS}),r=vn(e,{fontSize:e.fontSizeLG,selectHeight:e.controlHeightLG,multipleSelectItemHeight:e.multipleItemHeightLG,borderRadius:e.borderRadiusLG,borderRadiusSM:e.borderRadius});return[Mm(e),Mm(n,"sm"),{[`${t}-multiple${t}-sm`]:{[`${t}-selection-placeholder`]:{insetInline:e.calc(e.controlPaddingHorizontalSM).sub(e.lineWidth).equal()},[`${t}-selection-search`]:{marginInlineStart:2}}},Mm(r,"lg")]};function Nm(e,t){const{componentCls:n,inputPaddingHorizontalBase:r,borderRadius:o,fontSizeIcon:i}=e,a=e.calc(e.controlHeight).sub(e.calc(e.lineWidth).mul(2)).equal(),s=e.calc(r).add(i).equal(),c=t?`${n}-${t}`:"";return{[`${n}-single${c}`]:{fontSize:e.fontSize,height:e.controlHeight,[`${n}-selector`]:Object.assign(Object.assign({},jn(e,!0)),{display:"flex",borderRadius:o,[`${n}-selection-search`]:{position:"absolute",top:0,insetInlineStart:r,insetInlineEnd:de(s),bottom:0,"&-input":{width:"100%",WebkitAppearance:"textfield"}},[` - ${n}-selection-item, - ${n}-selection-placeholder - `]:{padding:0,lineHeight:de(a),transition:`all ${e.motionDurationSlow}, visibility 0s`,alignSelf:"center"},[`${n}-selection-placeholder`]:{transition:"none",pointerEvents:"none"},[["&:after",`${n}-selection-item:empty:after`,`${n}-selection-placeholder:empty:after`].join(",")]:{display:"inline-block",width:0,visibility:"hidden",content:'"\\a0"'}}),[` - &${n}-show-arrow ${n}-selection-item, - &${n}-show-arrow ${n}-selection-placeholder - 
`]:{paddingInlineEnd:e.showArrowPaddingInlineEnd},[`&${n}-open ${n}-selection-item`]:{color:e.colorTextPlaceholder},[`&:not(${n}-customize-input)`]:{[`${n}-selector`]:{width:"100%",height:"100%",padding:`0 ${de(r)}`,[`${n}-selection-search-input`]:{height:a},"&:after":{lineHeight:de(a)}}},[`&${n}-customize-input`]:{[`${n}-selector`]:{"&:after":{display:"none"},[`${n}-selection-search`]:{position:"static",width:"100%"},[`${n}-selection-placeholder`]:{position:"absolute",insetInlineStart:0,insetInlineEnd:0,padding:`0 ${de(r)}`,"&:after":{display:"none"}}}}}}}function V_(e){const{componentCls:t}=e,n=e.calc(e.controlPaddingHorizontalSM).sub(e.lineWidth).equal();return[Nm(e),Nm(vn(e,{controlHeight:e.controlHeightSM,borderRadius:e.borderRadiusSM}),"sm"),{[`${t}-single${t}-sm`]:{[`&:not(${t}-customize-input)`]:{[`${t}-selection-search`]:{insetInlineStart:n,insetInlineEnd:n},[`${t}-selector`]:{padding:`0 ${de(n)}`},[`&${t}-show-arrow ${t}-selection-search`]:{insetInlineEnd:e.calc(n).add(e.calc(e.fontSize).mul(1.5)).equal()},[` - &${t}-show-arrow ${t}-selection-item, - &${t}-show-arrow ${t}-selection-placeholder - `]:{paddingInlineEnd:e.calc(e.fontSize).mul(1.5).equal()}}}},Nm(vn(e,{controlHeight:e.singleItemHeightLG,fontSize:e.fontSizeLG,borderRadius:e.borderRadiusLG}),"lg")]}const W_=e=>{const{fontSize:t,lineHeight:n,lineWidth:r,controlHeight:o,controlHeightSM:i,controlHeightLG:a,paddingXXS:s,controlPaddingHorizontal:c,zIndexPopupBase:u,colorText:p,fontWeightStrong:v,controlItemBgActive:h,controlItemBgHover:m,colorBgContainer:b,colorFillSecondary:y,colorBgContainerDisabled:w,colorTextDisabled:C,colorPrimaryHover:S,colorPrimary:E,controlOutline:k}=e,O=s*2,$=r*2,T=Math.min(o-O,o-$),M=Math.min(i-O,i-$),P=Math.min(a-O,a-$);return{INTERNAL_FIXED_ITEM_MARGIN:Math.floor(s/2),zIndexPopup:u+50,optionSelectedColor:p,optionSelectedFontWeight:v,optionSelectedBg:h,optionActiveBg:m,optionPadding:`${(o-t*n)/2}px 
${c}px`,optionFontSize:t,optionLineHeight:n,optionHeight:o,selectorBg:b,clearBg:b,singleItemHeightLG:a,multipleItemBg:y,multipleItemBorderColor:"transparent",multipleItemHeight:T,multipleItemHeightSM:M,multipleItemHeightLG:P,multipleSelectorBgDisabled:w,multipleItemColorDisabled:C,multipleItemBorderColorDisabled:"transparent",showArrowPaddingInlineEnd:Math.ceil(e.fontSize*1.25),hoverBorderColor:S,activeBorderColor:E,activeOutlineColor:k}},dP=(e,t)=>{const{componentCls:n,antCls:r,controlOutlineWidth:o}=e;return{[`&:not(${n}-customize-input) ${n}-selector`]:{border:`${de(e.lineWidth)} ${e.lineType} ${t.borderColor}`,background:e.selectorBg},[`&:not(${n}-disabled):not(${n}-customize-input):not(${r}-pagination-size-changer)`]:{[`&:hover ${n}-selector`]:{borderColor:t.hoverBorderHover},[`${n}-focused& ${n}-selector`]:{borderColor:t.activeBorderColor,boxShadow:`0 0 0 ${de(o)} ${t.activeOutlineColor}`,outline:0}}}},lk=(e,t)=>({[`&${e.componentCls}-status-${t.status}`]:Object.assign({},dP(e,t))}),U_=e=>({"&-outlined":Object.assign(Object.assign(Object.assign(Object.assign({},dP(e,{borderColor:e.colorBorder,hoverBorderHover:e.hoverBorderColor,activeBorderColor:e.activeBorderColor,activeOutlineColor:e.activeOutlineColor})),lk(e,{status:"error",borderColor:e.colorError,hoverBorderHover:e.colorErrorHover,activeBorderColor:e.colorError,activeOutlineColor:e.colorErrorOutline})),lk(e,{status:"warning",borderColor:e.colorWarning,hoverBorderHover:e.colorWarningHover,activeBorderColor:e.colorWarning,activeOutlineColor:e.colorWarningOutline})),{[`&${e.componentCls}-disabled`]:{[`&:not(${e.componentCls}-customize-input) ${e.componentCls}-selector`]:{background:e.colorBgContainerDisabled,color:e.colorTextDisabled}},[`&${e.componentCls}-multiple ${e.componentCls}-selection-item`]:{background:e.multipleItemBg,border:`${de(e.lineWidth)} ${e.lineType} ${e.multipleItemBorderColor}`}})}),fP=(e,t)=>{const{componentCls:n,antCls:r}=e;return{[`&:not(${n}-customize-input) 
${n}-selector`]:{background:t.bg,border:`${de(e.lineWidth)} ${e.lineType} transparent`,color:t.color},[`&:not(${n}-disabled):not(${n}-customize-input):not(${r}-pagination-size-changer)`]:{[`&:hover ${n}-selector`]:{background:t.hoverBg},[`${n}-focused& ${n}-selector`]:{background:e.selectorBg,borderColor:t.activeBorderColor,outline:0}}}},ck=(e,t)=>({[`&${e.componentCls}-status-${t.status}`]:Object.assign({},fP(e,t))}),K_=e=>({"&-filled":Object.assign(Object.assign(Object.assign(Object.assign({},fP(e,{bg:e.colorFillTertiary,hoverBg:e.colorFillSecondary,activeBorderColor:e.activeBorderColor,color:e.colorText})),ck(e,{status:"error",bg:e.colorErrorBg,hoverBg:e.colorErrorBgHover,activeBorderColor:e.colorError,color:e.colorError})),ck(e,{status:"warning",bg:e.colorWarningBg,hoverBg:e.colorWarningBgHover,activeBorderColor:e.colorWarning,color:e.colorWarning})),{[`&${e.componentCls}-disabled`]:{[`&:not(${e.componentCls}-customize-input) ${e.componentCls}-selector`]:{borderColor:e.colorBorder,background:e.colorBgContainerDisabled,color:e.colorTextDisabled}},[`&${e.componentCls}-multiple ${e.componentCls}-selection-item`]:{background:e.colorBgContainer,border:`${de(e.lineWidth)} ${e.lineType} ${e.colorSplit}`}})}),q_=e=>({"&-borderless":{[`${e.componentCls}-selector`]:{background:"transparent",borderColor:"transparent"},[`&${e.componentCls}-disabled`]:{[`&:not(${e.componentCls}-customize-input) ${e.componentCls}-selector`]:{color:e.colorTextDisabled}},[`&${e.componentCls}-multiple ${e.componentCls}-selection-item`]:{background:e.multipleItemBg,border:`${de(e.lineWidth)} ${e.lineType} ${e.multipleItemBorderColor}`},[`&${e.componentCls}-status-error`]:{[`${e.componentCls}-selection-item`]:{color:e.colorError}},[`&${e.componentCls}-status-warning`]:{[`${e.componentCls}-selection-item`]:{color:e.colorWarning}}}}),X_=e=>({[e.componentCls]:Object.assign(Object.assign(Object.assign({},U_(e)),K_(e)),q_(e))}),G_=e=>{const{componentCls:t}=e;return{position:"relative",transition:`all 
${e.motionDurationMid} ${e.motionEaseInOut}`,input:{cursor:"pointer"},[`${t}-show-search&`]:{cursor:"text",input:{cursor:"auto",color:"inherit",height:"100%"}},[`${t}-disabled&`]:{cursor:"not-allowed",input:{cursor:"not-allowed"}}}},Y_=e=>{const{componentCls:t}=e;return{[`${t}-selection-search-input`]:{margin:0,padding:0,background:"transparent",border:"none",outline:"none",appearance:"none",fontFamily:"inherit","&::-webkit-search-cancel-button":{display:"none","-webkit-appearance":"none"}}}},Q_=e=>{const{antCls:t,componentCls:n,inputPaddingHorizontalBase:r,iconCls:o}=e;return{[n]:Object.assign(Object.assign({},jn(e)),{position:"relative",display:"inline-block",cursor:"pointer",[`&:not(${n}-customize-input) ${n}-selector`]:Object.assign(Object.assign({},G_(e)),Y_(e)),[`${n}-selection-item`]:Object.assign(Object.assign({flex:1,fontWeight:"normal",position:"relative",userSelect:"none"},Ka),{[`> ${t}-typography`]:{display:"inline"}}),[`${n}-selection-placeholder`]:Object.assign(Object.assign({},Ka),{flex:1,color:e.colorTextPlaceholder,pointerEvents:"none"}),[`${n}-arrow`]:Object.assign(Object.assign({},Mv()),{position:"absolute",top:"50%",insetInlineStart:"auto",insetInlineEnd:r,height:e.fontSizeIcon,marginTop:e.calc(e.fontSizeIcon).mul(-1).div(2).equal(),color:e.colorTextQuaternary,fontSize:e.fontSizeIcon,lineHeight:1,textAlign:"center",pointerEvents:"none",display:"flex",alignItems:"center",transition:`opacity ${e.motionDurationSlow} ease`,[o]:{verticalAlign:"top",transition:`transform ${e.motionDurationSlow}`,"> svg":{verticalAlign:"top"},[`&:not(${n}-suffix)`]:{pointerEvents:"auto"}},[`${n}-disabled &`]:{cursor:"not-allowed"},"> 
*:not(:last-child)":{marginInlineEnd:8}}),[`${n}-clear`]:{position:"absolute",top:"50%",insetInlineStart:"auto",insetInlineEnd:r,zIndex:1,display:"inline-block",width:e.fontSizeIcon,height:e.fontSizeIcon,marginTop:e.calc(e.fontSizeIcon).mul(-1).div(2).equal(),color:e.colorTextQuaternary,fontSize:e.fontSizeIcon,fontStyle:"normal",lineHeight:1,textAlign:"center",textTransform:"none",cursor:"pointer",opacity:0,transition:`color ${e.motionDurationMid} ease, opacity ${e.motionDurationSlow} ease`,textRendering:"auto","&:before":{display:"block"},"&:hover":{color:e.colorTextTertiary}},[`&:hover ${n}-clear`]:{opacity:1,background:e.colorBgBase,borderRadius:"50%"}}),[`${n}-has-feedback`]:{[`${n}-clear`]:{insetInlineEnd:e.calc(r).add(e.fontSize).add(e.paddingXS).equal()}}}},Z_=e=>{const{componentCls:t}=e;return[{[t]:{[`&${t}-in-form-item`]:{width:"100%"}}},Q_(e),V_(e),__(e),B_(e),{[`${t}-rtl`]:{direction:"rtl"}},_v(e,{borderElCls:`${t}-selector`,focusElCls:`${t}-focused`})]},pP=In("Select",(e,t)=>{let{rootPrefixCls:n}=t;const r=vn(e,{rootPrefixCls:n,inputPaddingHorizontalBase:e.calc(e.paddingSM).sub(1).equal(),multipleSelectItemHeight:e.multipleItemHeight,selectHeight:e.controlHeight});return[Z_(r),X_(r)]},W_,{unitless:{optionLineHeight:!0,optionSelectedFontWeight:!0}});var J_={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M912 190h-69.9c-9.8 0-19.1 4.5-25.1 12.2L404.7 724.5 207 474a32 32 0 00-25.1-12.2H112c-6.7 0-10.4 7.7-6.3 12.9l273.9 347c12.8 16.2 37.4 16.2 50.3 0l488.4-618.9c4.1-5.1.4-12.8-6.3-12.8z"}}]},name:"check",theme:"outlined"},e7=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:J_}))},Cw=d.forwardRef(e7),t7={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M884 256h-75c-5.1 0-9.9 2.5-12.9 6.6L512 654.2 227.9 262.6c-3-4.1-7.8-6.6-12.9-6.6h-75c-6.5 0-10.3 7.4-6.5 12.7l352.6 486.1c12.8 17.6 39 17.6 51.7 
0l352.6-486.1c3.9-5.3.1-12.7-6.4-12.7z"}}]},name:"down",theme:"outlined"},n7=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:t7}))},vP=d.forwardRef(n7),r7={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M909.6 854.5L649.9 594.8C690.2 542.7 712 479 712 412c0-80.2-31.3-155.4-87.9-212.1-56.6-56.7-132-87.9-212.1-87.9s-155.5 31.3-212.1 87.9C143.2 256.5 112 331.8 112 412c0 80.1 31.3 155.5 87.9 212.1C256.5 680.8 331.8 712 412 712c67 0 130.6-21.8 182.7-62l259.7 259.6a8.2 8.2 0 0011.6 0l43.6-43.5a8.2 8.2 0 000-11.6zM570.4 570.4C528 612.7 471.8 636 412 636s-116-23.3-158.4-65.6C211.3 528 188 471.8 188 412s23.3-116.1 65.6-158.4C296 211.3 352.2 188 412 188s116.1 23.2 158.4 65.6S636 352.2 636 412s-23.3 116.1-65.6 158.4z"}}]},name:"search",theme:"outlined"},o7=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:r7}))},Ew=d.forwardRef(o7);function hP(e){let{suffixIcon:t,clearIcon:n,menuItemSelectedIcon:r,removeIcon:o,loading:i,multiple:a,hasFeedback:s,prefixCls:c,showSuffixIcon:u,feedbackIcon:p,showArrow:v,componentName:h}=e;const m=n??d.createElement(bd,null),b=S=>t===null&&!s&&!v?null:d.createElement(d.Fragment,null,u!==!1&&S,s&&p);let y=null;if(t!==void 0)y=b(t);else if(i)y=b(d.createElement(Xa,{spin:!0}));else{const S=`${c}-suffix`;y=E=>{let{open:k,showSearch:O}=E;return b(k&&O?d.createElement(Ew,{className:S}):d.createElement(vP,{className:S}))}}let w=null;r!==void 0?w=r:a?w=d.createElement(Cw,null):w=null;let C=null;return o!==void 0?C=o:C=d.createElement(yd,null),{clearIcon:m,suffixIcon:y,itemIcon:w,removeIcon:C}}function gP(e,t){return t!==void 0?t:e!==null}var i7=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{var 
n;const{prefixCls:r,bordered:o,className:i,rootClassName:a,getPopupContainer:s,popupClassName:c,dropdownClassName:u,listHeight:p=256,placement:v,listItemHeight:h,size:m,disabled:b,notFoundContent:y,status:w,builtinPlacements:C,dropdownMatchSelectWidth:S,popupMatchSelectWidth:E,direction:k,style:O,allowClear:$,variant:T,dropdownStyle:M,transitionName:P,tagRender:R,maxCount:A}=e,V=i7(e,["prefixCls","bordered","className","rootClassName","getPopupContainer","popupClassName","dropdownClassName","listHeight","placement","listItemHeight","size","disabled","notFoundContent","status","builtinPlacements","dropdownMatchSelectWidth","popupMatchSelectWidth","direction","style","allowClear","variant","dropdownStyle","transitionName","tagRender","maxCount"]),{getPopupContainer:z,getPrefixCls:B,renderEmpty:_,direction:H,virtual:j,popupMatchSelectWidth:L,popupOverflow:F,select:U}=d.useContext(ht),[,D]=Ir(),W=h??(D==null?void 0:D.controlHeight),G=B("select",r),q=B(),J=k??H,{compactSize:Y,compactItemClassnames:Q}=lc(G,J),[te,ce]=Gv("select",T,o),se=br(G),[ne,ae,ee]=pP(G,se),re=d.useMemo(()=>{const{mode:Pe}=e;if(Pe!=="combobox")return Pe===mP?"combobox":Pe},[e.mode]),le=re==="multiple"||re==="tags",pe=gP(e.suffixIcon,e.showArrow),Oe=(n=E??S)!==null&&n!==void 0?n:L,{status:ge,hasFeedback:Re,isFormItemInput:ye,feedbackIcon:Te}=d.useContext(Vr),Ae=$d(ge,w);let me;y!==void 0?me=y:re==="combobox"?me=null:me=(_==null?void 0:_("Select"))||d.createElement(Xv,{componentName:"Select"});const{suffixIcon:Ie,itemIcon:Le,removeIcon:Be,clearIcon:et}=hP(Object.assign(Object.assign({},V),{multiple:le,hasFeedback:Re,feedbackIcon:Te,showSuffixIcon:pe,prefixCls:G,componentName:"Select"})),rt=$===!0?{clearIcon:et}:$,Ze=Ln(V,["suffixIcon","itemIcon"]),Ve=ie(c||u,{[`${G}-dropdown-${J}`]:J==="rtl"},a,ee,se,ae),Ye=Go(Pe=>{var Ke;return(Ke=m??Y)!==null&&Ke!==void 
0?Ke:Pe}),Ge=d.useContext(So),Fe=b??Ge,we=ie({[`${G}-lg`]:Ye==="large",[`${G}-sm`]:Ye==="small",[`${G}-rtl`]:J==="rtl",[`${G}-${te}`]:ce,[`${G}-in-form-item`]:ye},od(G,Ae,Re),Q,U==null?void 0:U.className,i,a,ee,se,ae),ze=d.useMemo(()=>v!==void 0?v:J==="rtl"?"bottomRight":"bottomLeft",[v,J]),[Me]=sc("SelectLike",M==null?void 0:M.zIndex);return ne(d.createElement(Sw,Object.assign({ref:t,virtual:j,showSearch:U==null?void 0:U.showSearch},Ze,{style:Object.assign(Object.assign({},U==null?void 0:U.style),O),dropdownMatchSelectWidth:Oe,transitionName:ra(q,"slide-up",P),builtinPlacements:uP(C,F),listHeight:p,listItemHeight:W,mode:re,prefixCls:G,placement:ze,direction:J,suffixIcon:Ie,menuItemSelectedIcon:Le,removeIcon:Be,allowClear:rt,notFoundContent:me,className:we,getPopupContainer:s||z,dropdownClassName:Ve,disabled:Fe,dropdownStyle:Object.assign(Object.assign({},M),{zIndex:Me}),maxCount:le?A:void 0,tagRender:le?R:void 0})))},yi=d.forwardRef(a7),s7=bw(yi);yi.SECRET_COMBOBOX_MODE_DO_NOT_USE=mP;yi.Option=xw;yi.OptGroup=ww;yi._InternalPanelDoNotUseOrYouWillBeFired=s7;const id=["xxl","xl","lg","md","sm","xs"],l7=e=>({xs:`(max-width: ${e.screenXSMax}px)`,sm:`(min-width: ${e.screenSM}px)`,md:`(min-width: ${e.screenMD}px)`,lg:`(min-width: ${e.screenLG}px)`,xl:`(min-width: ${e.screenXL}px)`,xxl:`(min-width: ${e.screenXXL}px)`}),c7=e=>{const t=e,n=[].concat(id).reverse();return n.forEach((r,o)=>{const i=r.toUpperCase(),a=`screen${i}Min`,s=`screen${i}`;if(!(t[a]<=t[s]))throw new Error(`${a}<=${s} fails : !(${t[a]}<=${t[s]})`);if(o{const n=new Map;let r=-1,o={};return{matchHandlers:{},dispatch(i){return o=i,n.forEach(a=>a(o)),n.size>=1},subscribe(i){return n.size||this.register(),r+=1,n.set(r,i),i(o),r},unsubscribe(i){n.delete(i),n.size||this.unregister()},unregister(){Object.keys(t).forEach(i=>{const a=t[i],s=this.matchHandlers[a];s==null||s.mql.removeListener(s==null?void 0:s.listener)}),n.clear()},register(){Object.keys(t).forEach(i=>{const 
a=t[i],s=u=>{let{matches:p}=u;this.dispatch(Object.assign(Object.assign({},o),{[i]:p}))},c=window.matchMedia(a);c.addListener(s),this.matchHandlers[a]={mql:c,listener:s},s(c)})},responsiveMap:t}},[e])}function kw(){const[,e]=d.useReducer(t=>t+1,0);return e}function yP(){let e=arguments.length>0&&arguments[0]!==void 0?arguments[0]:!0;const t=d.useRef({}),n=kw(),r=bP();return sn(()=>{const o=r.subscribe(i=>{t.current=i,e&&n()});return()=>r.unsubscribe(o)},[]),t.current}const Ql=e=>e?typeof e=="function"?e():e:null;function Ow(e){var t=e.children,n=e.prefixCls,r=e.id,o=e.overlayInnerStyle,i=e.className,a=e.style;return d.createElement("div",{className:ie("".concat(n,"-content"),i),style:a},d.createElement("div",{className:"".concat(n,"-inner"),id:r,role:"tooltip",style:o},typeof t=="function"?t():t))}var pl={shiftX:64,adjustY:1},vl={adjustX:1,shiftY:!0},Do=[0,0],u7={left:{points:["cr","cl"],overflow:vl,offset:[-4,0],targetOffset:Do},right:{points:["cl","cr"],overflow:vl,offset:[4,0],targetOffset:Do},top:{points:["bc","tc"],overflow:pl,offset:[0,-4],targetOffset:Do},bottom:{points:["tc","bc"],overflow:pl,offset:[0,4],targetOffset:Do},topLeft:{points:["bl","tl"],overflow:pl,offset:[0,-4],targetOffset:Do},leftTop:{points:["tr","tl"],overflow:vl,offset:[-4,0],targetOffset:Do},topRight:{points:["br","tr"],overflow:pl,offset:[0,-4],targetOffset:Do},rightTop:{points:["tl","tr"],overflow:vl,offset:[4,0],targetOffset:Do},bottomRight:{points:["tr","br"],overflow:pl,offset:[0,4],targetOffset:Do},rightBottom:{points:["bl","br"],overflow:vl,offset:[4,0],targetOffset:Do},bottomLeft:{points:["tl","bl"],overflow:pl,offset:[0,4],targetOffset:Do},leftBottom:{points:["br","bl"],overflow:vl,offset:[-4,0],targetOffset:Do}},d7=["overlayClassName","trigger","mouseEnterDelay","mouseLeaveDelay","overlayStyle","prefixCls","children","onVisibleChange","afterVisibleChange","transitionName","animation","motion","placement","align","destroyTooltipOnHide","defaultVisible","getTooltipContainer","over
layInnerStyle","arrowContent","overlay","id","showArrow"],f7=function(t,n){var r=t.overlayClassName,o=t.trigger,i=o===void 0?["hover"]:o,a=t.mouseEnterDelay,s=a===void 0?0:a,c=t.mouseLeaveDelay,u=c===void 0?.1:c,p=t.overlayStyle,v=t.prefixCls,h=v===void 0?"rc-tooltip":v,m=t.children,b=t.onVisibleChange,y=t.afterVisibleChange,w=t.transitionName,C=t.animation,S=t.motion,E=t.placement,k=E===void 0?"right":E,O=t.align,$=O===void 0?{}:O,T=t.destroyTooltipOnHide,M=T===void 0?!1:T,P=t.defaultVisible,R=t.getTooltipContainer,A=t.overlayInnerStyle;t.arrowContent;var V=t.overlay,z=t.id,B=t.showArrow,_=B===void 0?!0:B,H=Mt(t,d7),j=d.useRef(null);d.useImperativeHandle(n,function(){return j.current});var L=Z({},H);"visible"in t&&(L.popupVisible=t.visible);var F=function(){return d.createElement(Ow,{key:"content",prefixCls:h,id:z,overlayInnerStyle:A},V)};return d.createElement(Kv,$e({popupClassName:r,prefixCls:h,popup:F,action:i,builtinPlacements:u7,popupPlacement:k,ref:j,popupAlign:$,getPopupContainer:R,onPopupVisibleChange:b,afterPopupVisibleChange:y,popupTransitionName:w,popupAnimation:C,popupMotion:S,defaultPopupVisible:P,autoDestroy:M,mouseLeaveDelay:u,popupStyle:p,mouseEnterDelay:s,arrow:_},L),m)};const p7=d.forwardRef(f7);function $w(e){const{sizePopupArrow:t,borderRadiusXS:n,borderRadiusOuter:r}=e,o=t/2,i=0,a=o,s=r*1/Math.sqrt(2),c=o-r*(1-1/Math.sqrt(2)),u=o-n*(1/Math.sqrt(2)),p=r*(Math.sqrt(2)-1)+n*(1/Math.sqrt(2)),v=2*o-u,h=p,m=2*o-s,b=c,y=2*o-i,w=a,C=o*Math.sqrt(2)+r*(Math.sqrt(2)-2),S=r*(Math.sqrt(2)-1),E=`polygon(${S}px 100%, 50% ${S}px, ${2*o-S}px 100%, ${S}px 100%)`,k=`path('M ${i} ${a} A ${r} ${r} 0 0 0 ${s} ${c} L ${u} ${p} A ${n} ${n} 0 0 1 ${v} ${h} L ${m} ${b} A ${r} ${r} 0 0 0 ${y} ${w} Z')`;return{arrowShadowWidth:C,arrowPath:k,arrowPolygon:E}}const 
v7=(e,t,n)=>{const{sizePopupArrow:r,arrowPolygon:o,arrowPath:i,arrowShadowWidth:a,borderRadiusXS:s,calc:c}=e;return{pointerEvents:"none",width:r,height:r,overflow:"hidden","&::before":{position:"absolute",bottom:0,insetInlineStart:0,width:r,height:c(r).div(2).equal(),background:t,clipPath:{_multi_value_:!0,value:[o,i]},content:'""'},"&::after":{content:'""',position:"absolute",width:a,height:a,bottom:0,insetInline:0,margin:"auto",borderRadius:{_skip_check_:!0,value:`0 0 ${de(s)} 0`},transform:"translateY(50%) rotate(-135deg)",boxShadow:n,zIndex:0,background:"transparent"}}},wP=8;function Yv(e){const{contentRadius:t,limitVerticalRadius:n}=e,r=t>12?t+2:12;return{arrowOffsetHorizontal:r,arrowOffsetVertical:n?wP:r}}function Zf(e,t){return e?t:{}}function Iw(e,t,n){const{componentCls:r,boxShadowPopoverArrow:o,arrowOffsetVertical:i,arrowOffsetHorizontal:a}=e,{arrowDistance:s=0,arrowPlacement:c={left:!0,right:!0,top:!0,bottom:!0}}=n||{};return{[r]:Object.assign(Object.assign(Object.assign(Object.assign({[`${r}-arrow`]:[Object.assign(Object.assign({position:"absolute",zIndex:1,display:"block"},v7(e,t,o)),{"&:before":{background:t}})]},Zf(!!c.top,{[[`&-placement-top > ${r}-arrow`,`&-placement-topLeft > ${r}-arrow`,`&-placement-topRight > ${r}-arrow`].join(",")]:{bottom:s,transform:"translateY(100%) rotate(180deg)"},[`&-placement-top > ${r}-arrow`]:{left:{_skip_check_:!0,value:"50%"},transform:"translateX(-50%) translateY(100%) rotate(180deg)"},"&-placement-topLeft":{"--arrow-offset-horizontal":a,[`> ${r}-arrow`]:{left:{_skip_check_:!0,value:a}}},"&-placement-topRight":{"--arrow-offset-horizontal":`calc(100% - ${de(a)})`,[`> ${r}-arrow`]:{right:{_skip_check_:!0,value:a}}}})),Zf(!!c.bottom,{[[`&-placement-bottom > ${r}-arrow`,`&-placement-bottomLeft > ${r}-arrow`,`&-placement-bottomRight > ${r}-arrow`].join(",")]:{top:s,transform:"translateY(-100%)"},[`&-placement-bottom > ${r}-arrow`]:{left:{_skip_check_:!0,value:"50%"},transform:"translateX(-50%) 
translateY(-100%)"},"&-placement-bottomLeft":{"--arrow-offset-horizontal":a,[`> ${r}-arrow`]:{left:{_skip_check_:!0,value:a}}},"&-placement-bottomRight":{"--arrow-offset-horizontal":`calc(100% - ${de(a)})`,[`> ${r}-arrow`]:{right:{_skip_check_:!0,value:a}}}})),Zf(!!c.left,{[[`&-placement-left > ${r}-arrow`,`&-placement-leftTop > ${r}-arrow`,`&-placement-leftBottom > ${r}-arrow`].join(",")]:{right:{_skip_check_:!0,value:s},transform:"translateX(100%) rotate(90deg)"},[`&-placement-left > ${r}-arrow`]:{top:{_skip_check_:!0,value:"50%"},transform:"translateY(-50%) translateX(100%) rotate(90deg)"},[`&-placement-leftTop > ${r}-arrow`]:{top:i},[`&-placement-leftBottom > ${r}-arrow`]:{bottom:i}})),Zf(!!c.right,{[[`&-placement-right > ${r}-arrow`,`&-placement-rightTop > ${r}-arrow`,`&-placement-rightBottom > ${r}-arrow`].join(",")]:{left:{_skip_check_:!0,value:s},transform:"translateX(-100%) rotate(-90deg)"},[`&-placement-right > ${r}-arrow`]:{top:{_skip_check_:!0,value:"50%"},transform:"translateY(-50%) translateX(-100%) rotate(-90deg)"},[`&-placement-rightTop > ${r}-arrow`]:{top:i},[`&-placement-rightBottom > ${r}-arrow`]:{bottom:i}}))}}function h7(e,t,n,r){if(r===!1)return{adjustX:!1,adjustY:!1};const o=r&&typeof r=="object"?r:{},i={};switch(e){case"top":case"bottom":i.shiftX=t.arrowOffsetHorizontal*2+n,i.shiftY=!0,i.adjustY=!0;break;case"left":case"right":i.shiftY=t.arrowOffsetVertical*2+n,i.shiftX=!0,i.adjustX=!0;break}const a=Object.assign(Object.assign({},i),o);return a.shiftX||(a.adjustX=!0),a.shiftY||(a.adjustY=!0),a}const 
uk={left:{points:["cr","cl"]},right:{points:["cl","cr"]},top:{points:["bc","tc"]},bottom:{points:["tc","bc"]},topLeft:{points:["bl","tl"]},leftTop:{points:["tr","tl"]},topRight:{points:["br","tr"]},rightTop:{points:["tl","tr"]},bottomRight:{points:["tr","br"]},rightBottom:{points:["bl","br"]},bottomLeft:{points:["tl","bl"]},leftBottom:{points:["br","bl"]}},g7={topLeft:{points:["bl","tc"]},leftTop:{points:["tr","cl"]},topRight:{points:["br","tc"]},rightTop:{points:["tl","cr"]},bottomRight:{points:["tr","bc"]},rightBottom:{points:["bl","cr"]},bottomLeft:{points:["tl","bc"]},leftBottom:{points:["br","cl"]}},m7=new Set(["topLeft","topRight","bottomLeft","bottomRight","leftTop","leftBottom","rightTop","rightBottom"]);function xP(e){const{arrowWidth:t,autoAdjustOverflow:n,arrowPointAtCenter:r,offset:o,borderRadius:i,visibleFirst:a}=e,s=t/2,c={};return Object.keys(uk).forEach(u=>{const p=r&&g7[u]||uk[u],v=Object.assign(Object.assign({},p),{offset:[0,0],dynamicInset:!0});switch(c[u]=v,m7.has(u)&&(v.autoArrow=!1),u){case"top":case"topLeft":case"topRight":v.offset[1]=-s-o;break;case"bottom":case"bottomLeft":case"bottomRight":v.offset[1]=s+o;break;case"left":case"leftTop":case"leftBottom":v.offset[0]=-s-o;break;case"right":case"rightTop":case"rightBottom":v.offset[0]=s+o;break}const h=Yv({contentRadius:i,limitVerticalRadius:!0});if(r)switch(u){case"topLeft":case"bottomLeft":v.offset[0]=-h.arrowOffsetHorizontal-s;break;case"topRight":case"bottomRight":v.offset[0]=h.arrowOffsetHorizontal+s;break;case"leftTop":case"rightTop":v.offset[1]=-h.arrowOffsetHorizontal*2+s;break;case"leftBottom":case"rightBottom":v.offset[1]=h.arrowOffsetHorizontal*2-s;break}v.overflow=h7(u,h,t,n),a&&(v.htmlRegion="visibleFirst")}),c}const 
b7=e=>{const{componentCls:t,tooltipMaxWidth:n,tooltipColor:r,tooltipBg:o,tooltipBorderRadius:i,zIndexPopup:a,controlHeight:s,boxShadowSecondary:c,paddingSM:u,paddingXS:p}=e;return[{[t]:Object.assign(Object.assign(Object.assign(Object.assign({},jn(e)),{position:"absolute",zIndex:a,display:"block",width:"max-content",maxWidth:n,visibility:"visible","--valid-offset-x":"var(--arrow-offset-horizontal, var(--arrow-x))",transformOrigin:["var(--valid-offset-x, 50%)","var(--arrow-y, 50%)"].join(" "),"&-hidden":{display:"none"},"--antd-arrow-background-color":o,[`${t}-inner`]:{minWidth:"1em",minHeight:s,padding:`${de(e.calc(u).div(2).equal())} ${de(p)}`,color:r,textAlign:"start",textDecoration:"none",wordWrap:"break-word",backgroundColor:o,borderRadius:i,boxShadow:c,boxSizing:"border-box"},[["&-placement-left","&-placement-leftTop","&-placement-leftBottom","&-placement-right","&-placement-rightTop","&-placement-rightBottom"].join(",")]:{[`${t}-inner`]:{borderRadius:e.min(i,wP)}},[`${t}-content`]:{position:"relative"}}),NI(e,(v,h)=>{let{darkColor:m}=h;return{[`&${t}-${v}`]:{[`${t}-inner`]:{backgroundColor:m},[`${t}-arrow`]:{"--antd-arrow-background-color":m}}}})),{"&-rtl":{direction:"rtl"}})},Iw(e,"var(--antd-arrow-background-color)"),{[`${t}-pure`]:{position:"relative",maxWidth:"none",margin:e.sizePopupArrow}}]},y7=e=>Object.assign(Object.assign({zIndexPopup:e.zIndexPopupBase+70},Yv({contentRadius:e.borderRadius,limitVerticalRadius:!0})),$w(vn(e,{borderRadiusOuter:Math.min(e.borderRadiusOuter,4)}))),SP=function(e){let t=arguments.length>1&&arguments[1]!==void 0?arguments[1]:!0;return In("Tooltip",r=>{const{borderRadius:o,colorTextLightSolid:i,colorBgSpotlight:a}=r,s=vn(r,{tooltipMaxWidth:250,tooltipColor:i,tooltipBorderRadius:o,tooltipBg:a});return[b7(s),Sd(r,"zoom-big-fast")]},y7,{resetStyle:!1,injectStyle:t})(e)},w7=Zu.map(e=>`${e}-inverse`),x7=["success","processing","error","default","warning"];function CP(e){return(arguments.length>1&&arguments[1]!==void 
0?arguments[1]:!0)?[].concat(Se(w7),Se(Zu)).includes(e):Zu.includes(e)}function S7(e){return x7.includes(e)}function EP(e,t){const n=CP(t),r=ie({[`${e}-${t}`]:t&&n}),o={},i={};return t&&!n&&(o.background=t,i["--antd-arrow-background-color"]=t),{className:r,overlayStyle:o,arrowStyle:i}}const C7=e=>{const{prefixCls:t,className:n,placement:r="top",title:o,color:i,overlayInnerStyle:a}=e,{getPrefixCls:s}=d.useContext(ht),c=s("tooltip",t),[u,p,v]=SP(c),h=EP(c,i),m=h.arrowStyle,b=Object.assign(Object.assign({},a),h.overlayStyle),y=ie(p,v,c,`${c}-pure`,`${c}-placement-${r}`,n,h.className);return u(d.createElement("div",{className:y,style:m},d.createElement("div",{className:`${c}-arrow`}),d.createElement(Ow,Object.assign({},e,{className:p,prefixCls:c,overlayInnerStyle:b}),o)))};var E7=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{var n,r;const{prefixCls:o,openClassName:i,getTooltipContainer:a,overlayClassName:s,color:c,overlayInnerStyle:u,children:p,afterOpenChange:v,afterVisibleChange:h,destroyTooltipOnHide:m,arrow:b=!0,title:y,overlay:w,builtinPlacements:C,arrowPointAtCenter:S=!1,autoAdjustOverflow:E=!0}=e,k=!!b,[,O]=Ir(),{getPopupContainer:$,getPrefixCls:T,direction:M}=d.useContext(ht),P=As(),R=d.useRef(null),A=()=>{var me;(me=R.current)===null||me===void 0||me.forceAlign()};d.useImperativeHandle(t,()=>{var me;return{forceAlign:A,forcePopupAlign:()=>{P.deprecated(!1,"forcePopupAlign","forceAlign"),A()},nativeElement:(me=R.current)===null||me===void 0?void 0:me.nativeElement}});const[V,z]=Dn(!1,{value:(n=e.open)!==null&&n!==void 0?n:e.visible,defaultValue:(r=e.defaultOpen)!==null&&r!==void 0?r:e.defaultVisible}),B=!y&&!w&&y!==0,_=me=>{var Ie,Le;z(B?!1:me),B||((Ie=e.onOpenChange)===null||Ie===void 0||Ie.call(e,me),(Le=e.onVisibleChange)===null||Le===void 0||Le.call(e,me))},H=d.useMemo(()=>{var me,Ie;let 
Le=S;return typeof b=="object"&&(Le=(Ie=(me=b.pointAtCenter)!==null&&me!==void 0?me:b.arrowPointAtCenter)!==null&&Ie!==void 0?Ie:S),C||xP({arrowPointAtCenter:Le,autoAdjustOverflow:E,arrowWidth:k?O.sizePopupArrow:0,borderRadius:O.borderRadius,offset:O.marginXXS,visibleFirst:!0})},[S,b,C,O]),j=d.useMemo(()=>y===0?y:w||y||"",[w,y]),L=d.createElement(nd,{space:!0},typeof j=="function"?j():j),{getPopupContainer:F,placement:U="top",mouseEnterDelay:D=.1,mouseLeaveDelay:W=.1,overlayStyle:G,rootClassName:q}=e,J=E7(e,["getPopupContainer","placement","mouseEnterDelay","mouseLeaveDelay","overlayStyle","rootClassName"]),Y=T("tooltip",o),Q=T(),te=e["data-popover-inject"];let ce=V;!("open"in e)&&!("visible"in e)&&B&&(ce=!1);const se=d.isValidElement(p)&&!QI(p)?p:d.createElement("span",null,p),ne=se.props,ae=!ne.className||typeof ne.className=="string"?ie(ne.className,i||`${Y}-open`):ne.className,[ee,re,le]=SP(Y,!te),pe=EP(Y,c),Oe=pe.arrowStyle,ge=Object.assign(Object.assign({},u),pe.overlayStyle),Re=ie(s,{[`${Y}-rtl`]:M==="rtl"},pe.className,q,re,le),[ye,Te]=sc("Tooltip",J.zIndex),Ae=d.createElement(p7,Object.assign({},J,{zIndex:ye,showArrow:k,placement:U,mouseEnterDelay:D,mouseLeaveDelay:W,prefixCls:Y,overlayClassName:Re,overlayStyle:Object.assign(Object.assign({},Oe),G),getTooltipContainer:F||a||$,ref:R,builtinPlacements:H,overlay:L,visible:ce,onVisibleChange:_,afterVisibleChange:v??h,overlayInnerStyle:ge,arrowContent:d.createElement("span",{className:`${Y}-arrow-content`}),motion:{motionName:ra(Q,"zoom-big-fast",e.transitionName),motionDeadline:1e3},destroyTooltipOnHide:!!m}),ce?Dr(se,{className:ae}):se);return ee(d.createElement(Rv.Provider,{value:Te},Ae))}),gi=k7;gi._InternalPanelDoNotUseOrYouWillBeFired=C7;const 
O7=e=>{const{componentCls:t,popoverColor:n,titleMinWidth:r,fontWeightStrong:o,innerPadding:i,boxShadowSecondary:a,colorTextHeading:s,borderRadiusLG:c,zIndexPopup:u,titleMarginBottom:p,colorBgElevated:v,popoverBg:h,titleBorderBottom:m,innerContentPadding:b,titlePadding:y}=e;return[{[t]:Object.assign(Object.assign({},jn(e)),{position:"absolute",top:0,left:{_skip_check_:!0,value:0},zIndex:u,fontWeight:"normal",whiteSpace:"normal",textAlign:"start",cursor:"auto",userSelect:"text","--valid-offset-x":"var(--arrow-offset-horizontal, var(--arrow-x))",transformOrigin:["var(--valid-offset-x, 50%)","var(--arrow-y, 50%)"].join(" "),"--antd-arrow-background-color":v,width:"max-content",maxWidth:"100vw","&-rtl":{direction:"rtl"},"&-hidden":{display:"none"},[`${t}-content`]:{position:"relative"},[`${t}-inner`]:{backgroundColor:h,backgroundClip:"padding-box",borderRadius:c,boxShadow:a,padding:i},[`${t}-title`]:{minWidth:r,marginBottom:p,color:s,fontWeight:o,borderBottom:m,padding:y},[`${t}-inner-content`]:{color:n,padding:b}})},Iw(e,"var(--antd-arrow-background-color)"),{[`${t}-pure`]:{position:"relative",maxWidth:"none",margin:e.sizePopupArrow,display:"inline-block",[`${t}-content`]:{display:"inline-block"}}}]},$7=e=>{const{componentCls:t}=e;return{[t]:Zu.map(n=>{const r=e[`${n}6`];return{[`&${t}-${n}`]:{"--antd-arrow-background-color":r,[`${t}-inner`]:{backgroundColor:r},[`${t}-arrow`]:{background:"transparent"}}}})}},I7=e=>{const{lineWidth:t,controlHeight:n,fontHeight:r,padding:o,wireframe:i,zIndexPopupBase:a,borderRadiusLG:s,marginXS:c,lineType:u,colorSplit:p,paddingSM:v}=e,h=n-r,m=h/2,b=h/2-t,y=o;return Object.assign(Object.assign(Object.assign({titleMinWidth:177,zIndexPopup:a+30},$w(e)),Yv({contentRadius:s,limitVerticalRadius:!0})),{innerPadding:i?0:12,titleMarginBottom:i?0:c,titlePadding:i?`${m}px ${y}px ${b}px`:0,titleBorderBottom:i?`${t}px ${u} ${p}`:"none",innerContentPadding:i?`${v}px 
${y}px`:0})},kP=In("Popover",e=>{const{colorBgElevated:t,colorText:n}=e,r=vn(e,{popoverBg:t,popoverColor:n});return[O7(r),$7(r),Sd(r,"zoom-big")]},I7,{resetStyle:!1,deprecatedTokens:[["width","titleMinWidth"],["minWidth","titleMinWidth"]]});var T7=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{let{title:t,content:n,prefixCls:r}=e;return!t&&!n?null:d.createElement(d.Fragment,null,t&&d.createElement("div",{className:`${r}-title`},t),n&&d.createElement("div",{className:`${r}-inner-content`},n))},P7=e=>{const{hashId:t,prefixCls:n,className:r,style:o,placement:i="top",title:a,content:s,children:c}=e,u=Ql(a),p=Ql(s),v=ie(t,n,`${n}-pure`,`${n}-placement-${i}`,r);return d.createElement("div",{className:v,style:o},d.createElement("div",{className:`${n}-arrow`}),d.createElement(Ow,Object.assign({},e,{className:t,prefixCls:n}),c||d.createElement(OP,{prefixCls:n,title:u,content:p})))},$P=e=>{const{prefixCls:t,className:n}=e,r=T7(e,["prefixCls","className"]),{getPrefixCls:o}=d.useContext(ht),i=o("popover",t),[a,s,c]=kP(i);return a(d.createElement(P7,Object.assign({},r,{prefixCls:i,hashId:s,className:ie(n,c)})))};var M7=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{var n,r;const{prefixCls:o,title:i,content:a,overlayClassName:s,placement:c="top",trigger:u="hover",children:p,mouseEnterDelay:v=.1,mouseLeaveDelay:h=.1,onOpenChange:m,overlayStyle:b={}}=e,y=M7(e,["prefixCls","title","content","overlayClassName","placement","trigger","children","mouseEnterDelay","mouseLeaveDelay","onOpenChange","overlayStyle"]),{getPrefixCls:w}=d.useContext(ht),C=w("popover",o),[S,E,k]=kP(C),O=w(),$=ie(s,E,k),[T,M]=Dn(!1,{value:(n=e.open)!==null&&n!==void 
0?n:e.visible,defaultValue:(r=e.defaultOpen)!==null&&r!==void 0?r:e.defaultVisible}),P=(B,_)=>{M(B,!0),m==null||m(B,_)},R=B=>{B.keyCode===De.ESC&&P(!1,B)},A=B=>{P(B)},V=Ql(i),z=Ql(a);return S(d.createElement(gi,Object.assign({placement:c,trigger:u,mouseEnterDelay:v,mouseLeaveDelay:h,overlayStyle:b},y,{prefixCls:C,overlayClassName:$,ref:t,open:T,onOpenChange:A,overlay:V||z?d.createElement(OP,{prefixCls:C,title:V,content:z}):null,transitionName:ra(O,"zoom-big",y.transitionName),"data-popover-inject":!0}),Dr(p,{onKeyDown:B=>{var _,H;d.isValidElement(p)&&((H=p==null?void 0:(_=p.props).onKeyDown)===null||H===void 0||H.call(_,B)),R(B)}})))}),IP=N7;IP._InternalPanelDoNotUseOrYouWillBeFired=$P;var R7=De.ESC,D7=De.TAB;function j7(e){var t=e.visible,n=e.triggerRef,r=e.onVisibleChange,o=e.autoFocus,i=e.overlayRef,a=d.useRef(!1),s=function(){if(t){var v,h;(v=n.current)===null||v===void 0||(h=v.focus)===null||h===void 0||h.call(v),r==null||r(!1)}},c=function(){var v;return(v=i.current)!==null&&v!==void 0&&v.focus?(i.current.focus(),a.current=!0,!0):!1},u=function(v){switch(v.keyCode){case R7:s();break;case D7:var h=!1;a.current||(h=c()),h?v.preventDefault():s();break}};d.useEffect(function(){return t?(window.addEventListener("keydown",u),o&&bn(c,3),function(){window.removeEventListener("keydown",u),a.current=!1}):function(){a.current=!1}},[t])}var L7=d.forwardRef(function(e,t){var n=e.overlay,r=e.arrow,o=e.prefixCls,i=d.useMemo(function(){var s;return typeof n=="function"?s=n():s=n,s},[n]),a=Wr(t,i==null?void 0:i.ref);return ue.createElement(ue.Fragment,null,r&&ue.createElement("div",{className:"".concat(o,"-arrow")}),ue.cloneElement(i,{ref:vi(i)?a:void 
0}))}),hl={adjustX:1,adjustY:1},gl=[0,0],B7={topLeft:{points:["bl","tl"],overflow:hl,offset:[0,-4],targetOffset:gl},top:{points:["bc","tc"],overflow:hl,offset:[0,-4],targetOffset:gl},topRight:{points:["br","tr"],overflow:hl,offset:[0,-4],targetOffset:gl},bottomLeft:{points:["tl","bl"],overflow:hl,offset:[0,4],targetOffset:gl},bottom:{points:["tc","bc"],overflow:hl,offset:[0,4],targetOffset:gl},bottomRight:{points:["tr","br"],overflow:hl,offset:[0,4],targetOffset:gl}},A7=["arrow","prefixCls","transitionName","animation","align","placement","placements","getPopupContainer","showAction","hideAction","overlayClassName","overlayStyle","visible","trigger","autoFocus","overlay","children","onVisibleChange"];function z7(e,t){var n,r=e.arrow,o=r===void 0?!1:r,i=e.prefixCls,a=i===void 0?"rc-dropdown":i,s=e.transitionName,c=e.animation,u=e.align,p=e.placement,v=p===void 0?"bottomLeft":p,h=e.placements,m=h===void 0?B7:h,b=e.getPopupContainer,y=e.showAction,w=e.hideAction,C=e.overlayClassName,S=e.overlayStyle,E=e.visible,k=e.trigger,O=k===void 0?["hover"]:k,$=e.autoFocus,T=e.overlay,M=e.children,P=e.onVisibleChange,R=Mt(e,A7),A=ue.useState(),V=ve(A,2),z=V[0],B=V[1],_="visible"in e?E:z,H=ue.useRef(null),j=ue.useRef(null),L=ue.useRef(null);ue.useImperativeHandle(t,function(){return H.current});var F=function(te){B(te),P==null||P(te)};j7({visible:_,triggerRef:L,onVisibleChange:F,autoFocus:$,overlayRef:j});var U=function(te){var ce=e.onOverlayClick;B(!1),ce&&ce(te)},D=function(){return ue.createElement(L7,{ref:j,overlay:T,prefixCls:a,arrow:o})},W=function(){return typeof T=="function"?D:D()},G=function(){var te=e.minOverlayWidthMatchTrigger,ce=e.alignPoint;return"minOverlayWidthMatchTrigger"in e?te:!ce},q=function(){var te=e.openClassName;return te!==void 0?te:"".concat(a,"-open")},J=ue.cloneElement(M,{className:ie((n=M.props)===null||n===void 0?void 0:n.className,_&&q()),ref:vi(M)?Wr(L,M.ref):void 
0}),Y=w;return!Y&&O.indexOf("contextMenu")!==-1&&(Y=["click"]),ue.createElement(Kv,$e({builtinPlacements:m},R,{prefixCls:a,ref:H,popupClassName:ie(C,K({},"".concat(a,"-show-arrow"),o)),popupStyle:S,action:O,showAction:y,hideAction:Y,popupPlacement:v,popupAlign:u,popupTransitionName:s,popupAnimation:c,popupVisible:_,stretch:G()?"minWidth":"",popup:W(),onPopupVisibleChange:F,onPopupClick:U,getPopupContainer:b}),J)}const H7=ue.forwardRef(z7);var TP=d.createContext(null);function PP(e,t){return e===void 0?null:"".concat(e,"-").concat(t)}function MP(e){var t=d.useContext(TP);return PP(t,e)}var F7=["children","locked"],mi=d.createContext(null);function _7(e,t){var n=Z({},e);return Object.keys(t).forEach(function(r){var o=t[r];o!==void 0&&(n[r]=o)}),n}function ad(e){var t=e.children,n=e.locked,r=Mt(e,F7),o=d.useContext(mi),i=Ls(function(){return _7(o,r)},[o,r],function(a,s){return!n&&(a[0]!==s[0]||!zi(a[1],s[1],!0))});return d.createElement(mi.Provider,{value:i},t)}var V7=[],NP=d.createContext(null);function Qv(){return d.useContext(NP)}var RP=d.createContext(V7);function fc(e){var t=d.useContext(RP);return d.useMemo(function(){return e!==void 0?[].concat(Se(t),[e]):t},[t,e])}var DP=d.createContext(null),Tw=d.createContext({});function dk(e){var t=arguments.length>1&&arguments[1]!==void 0?arguments[1]:!1;if(xd(e)){var n=e.nodeName.toLowerCase(),r=["input","select","textarea","button"].includes(n)||e.isContentEditable||n==="a"&&!!e.getAttribute("href"),o=e.getAttribute("tabindex"),i=Number(o),a=null;return o&&!Number.isNaN(i)?a=i:r&&a===null&&(a=0),r&&e.disabled&&(a=null),a!==null&&(a>=0||t&&a<0)}return!1}function W7(e){var t=arguments.length>1&&arguments[1]!==void 0?arguments[1]:!1,n=Se(e.querySelectorAll("*")).filter(function(r){return dk(r,t)});return dk(e,t)&&n.unshift(e),n}var Ib=De.LEFT,Tb=De.RIGHT,Pb=De.UP,Rp=De.DOWN,Dp=De.ENTER,jP=De.ESC,tu=De.HOME,nu=De.END,fk=[Pb,Rp,Ib,Tb];function U7(e,t,n,r){var 
o,i="prev",a="next",s="children",c="parent";if(e==="inline"&&r===Dp)return{inlineTrigger:!0};var u=K(K({},Pb,i),Rp,a),p=K(K(K(K({},Ib,n?a:i),Tb,n?i:a),Rp,s),Dp,s),v=K(K(K(K(K(K({},Pb,i),Rp,a),Dp,s),jP,c),Ib,n?s:c),Tb,n?c:s),h={inline:u,horizontal:p,vertical:v,inlineSub:u,horizontalSub:v,verticalSub:v},m=(o=h["".concat(e).concat(t?"":"Sub")])===null||o===void 0?void 0:o[r];switch(m){case i:return{offset:-1,sibling:!0};case a:return{offset:1,sibling:!0};case c:return{offset:-1,sibling:!1};case s:return{offset:1,sibling:!1};default:return null}}function K7(e){for(var t=e;t;){if(t.getAttribute("data-menu-list"))return t;t=t.parentElement}return null}function q7(e,t){for(var n=e||document.activeElement;n;){if(t.has(n))return n;n=n.parentElement}return null}function Pw(e,t){var n=W7(e,!0);return n.filter(function(r){return t.has(r)})}function pk(e,t,n){var r=arguments.length>3&&arguments[3]!==void 0?arguments[3]:1;if(!e)return null;var o=Pw(e,t),i=o.length,a=o.findIndex(function(s){return n===s});return r<0?a===-1?a=i-1:a-=1:r>0&&(a+=1),a=(a+i)%i,o[a]}var Mb=function(t,n){var r=new Set,o=new Map,i=new Map;return t.forEach(function(a){var s=document.querySelector("[data-menu-id='".concat(PP(n,a),"']"));s&&(r.add(s),i.set(s,a),o.set(a,s))}),{elements:r,key2element:o,element2key:i}};function X7(e,t,n,r,o,i,a,s,c,u){var p=d.useRef(),v=d.useRef();v.current=t;var h=function(){bn.cancel(p.current)};return d.useEffect(function(){return function(){h()}},[]),function(m){var b=m.which;if([].concat(fk,[Dp,jP,tu,nu]).includes(b)){var y=i(),w=Mb(y,r),C=w,S=C.elements,E=C.key2element,k=C.element2key,O=E.get(t),$=q7(O,S),T=k.get($),M=U7(e,a(T,!0).length===1,n,b);if(!M&&b!==tu&&b!==nu)return;(fk.includes(b)||[tu,nu].includes(b))&&m.preventDefault();var P=function(j){if(j){var L=j,F=j.querySelector("a");F!=null&&F.getAttribute("href")&&(L=F);var U=k.get(j);s(U),h(),p.current=bn(function(){v.current===U&&L.focus()})}};if([tu,nu].includes(b)||M.sibling||!$){var 
R;!$||e==="inline"?R=o.current:R=K7($);var A,V=Pw(R,S);b===tu?A=V[0]:b===nu?A=V[V.length-1]:A=pk(R,S,$,M.offset),P(A)}else if(M.inlineTrigger)c(T);else if(M.offset>0)c(T,!0),h(),p.current=bn(function(){w=Mb(y,r);var H=$.getAttribute("aria-controls"),j=document.getElementById(H),L=pk(j,w.elements);P(L)},5);else if(M.offset<0){var z=a(T,!0),B=z[z.length-2],_=E.get(B);c(B,!1),P(_)}}u==null||u(m)}}function G7(e){Promise.resolve().then(e)}var Mw="__RC_UTIL_PATH_SPLIT__",vk=function(t){return t.join(Mw)},Y7=function(t){return t.split(Mw)},Nb="rc-menu-more";function Q7(){var e=d.useState({}),t=ve(e,2),n=t[1],r=d.useRef(new Map),o=d.useRef(new Map),i=d.useState([]),a=ve(i,2),s=a[0],c=a[1],u=d.useRef(0),p=d.useRef(!1),v=function(){p.current||n({})},h=d.useCallback(function(E,k){var O=vk(k);o.current.set(O,E),r.current.set(E,O),u.current+=1;var $=u.current;G7(function(){$===u.current&&v()})},[]),m=d.useCallback(function(E,k){var O=vk(k);o.current.delete(O),r.current.delete(E)},[]),b=d.useCallback(function(E){c(E)},[]),y=d.useCallback(function(E,k){var O=r.current.get(E)||"",$=Y7(O);return k&&s.includes($[0])&&$.unshift(Nb),$},[s]),w=d.useCallback(function(E,k){return E.filter(function(O){return O!==void 0}).some(function(O){var $=y(O,!0);return $.includes(k)})},[y]),C=function(){var k=Se(r.current.keys());return s.length&&k.push(Nb),k},S=d.useCallback(function(E){var k="".concat(r.current.get(E)).concat(Mw),O=new Set;return Se(o.current.keys()).forEach(function($){$.startsWith(k)&&O.add(o.current.get($))}),O},[]);return d.useEffect(function(){return function(){p.current=!0}},[]),{registerPath:h,unregisterPath:m,refreshOverflowKeys:b,isSubPathKey:w,getKeyPath:y,getKeys:C,getSubPathKeys:S}}function fu(e){var t=d.useRef(e);t.current=e;var n=d.useCallback(function(){for(var r,o=arguments.length,i=new Array(o),a=0;a1&&(S.motionAppear=!1);var E=S.onVisibleChanged;return S.onVisibleChanged=function(k){return!h.current&&!k&&w(!0),E==null?void 
0:E(k)},y?null:d.createElement(ad,{mode:i,locked:!h.current},d.createElement(Xo,$e({visible:C},S,{forceRender:c,removeOnLeave:!1,leavedClassName:"".concat(s,"-hidden")}),function(k){var O=k.className,$=k.style;return d.createElement(Nw,{id:t,className:O,style:$},o)}))}var v9=["style","className","title","eventKey","warnKey","disabled","internalPopupClose","children","itemIcon","expandIcon","popupClassName","popupOffset","popupStyle","onClick","onMouseEnter","onMouseLeave","onTitleClick","onTitleMouseEnter","onTitleMouseLeave"],h9=["active"],g9=d.forwardRef(function(e,t){var n=e.style,r=e.className,o=e.title,i=e.eventKey;e.warnKey;var a=e.disabled,s=e.internalPopupClose,c=e.children,u=e.itemIcon,p=e.expandIcon,v=e.popupClassName,h=e.popupOffset,m=e.popupStyle,b=e.onClick,y=e.onMouseEnter,w=e.onMouseLeave,C=e.onTitleClick,S=e.onTitleMouseEnter,E=e.onTitleMouseLeave,k=Mt(e,v9),O=MP(i),$=d.useContext(mi),T=$.prefixCls,M=$.mode,P=$.openKeys,R=$.disabled,A=$.overflowDisabled,V=$.activeKey,z=$.selectedKeys,B=$.itemIcon,_=$.expandIcon,H=$.onItemClick,j=$.onOpenChange,L=$.onActive,F=d.useContext(Tw),U=F._internalRenderSubMenuItem,D=d.useContext(DP),W=D.isSubPathKey,G=fc(),q="".concat(T,"-submenu"),J=R||a,Y=d.useRef(),Q=d.useRef(),te=u??B,ce=p??_,se=P.includes(i),ne=!A&&se,ae=W(z,i),ee=LP(i,J,S,E),re=ee.active,le=Mt(ee,h9),pe=d.useState(!1),Oe=ve(pe,2),ge=Oe[0],Re=Oe[1],ye=function(ze){J||Re(ze)},Te=function(ze){ye(!0),y==null||y({key:i,domEvent:ze})},Ae=function(ze){ye(!1),w==null||w({key:i,domEvent:ze})},me=d.useMemo(function(){return re||(M!=="inline"?ge||W([V],i):!1)},[M,re,V,ge,i,W]),Ie=BP(G.length),Le=function(ze){J||(C==null||C({key:i,domEvent:ze}),M==="inline"&&j(i,!se))},Be=fu(function(we){b==null||b(ev(we)),H(we)}),et=function(ze){M!=="inline"&&j(i,ze)},rt=function(){L(i)},Ze=O&&"".concat(O,"-popup"),Ve=d.createElement("div",$e({role:"menuitem",style:Ie,className:"".concat(q,"-title"),tabIndex:J?null:-1,ref:Y,title:typeof 
o=="string"?o:null,"data-menu-id":A&&O?null:O,"aria-expanded":ne,"aria-haspopup":!0,"aria-controls":Ze,"aria-disabled":J,onClick:Le,onFocus:rt},le),o,d.createElement(AP,{icon:M!=="horizontal"?ce:void 0,props:Z(Z({},e),{},{isOpen:ne,isSubMenu:!0})},d.createElement("i",{className:"".concat(q,"-arrow")}))),Ye=d.useRef(M);if(M!=="inline"&&G.length>1?Ye.current="vertical":Ye.current=M,!A){var Ge=Ye.current;Ve=d.createElement(f9,{mode:Ge,prefixCls:q,visible:!s&&ne&&M!=="inline",popupClassName:v,popupOffset:h,popupStyle:m,popup:d.createElement(ad,{mode:Ge==="horizontal"?"vertical":Ge},d.createElement(Nw,{id:Ze,ref:Q},c)),disabled:J,onVisibleChange:et},Ve)}var Fe=d.createElement(Di.Item,$e({ref:t,role:"none"},k,{component:"li",style:n,className:ie(q,"".concat(q,"-").concat(M),r,K(K(K(K({},"".concat(q,"-open"),ne),"".concat(q,"-active"),me),"".concat(q,"-selected"),ae),"".concat(q,"-disabled"),J)),onMouseEnter:Te,onMouseLeave:Ae}),Ve,!A&&d.createElement(p9,{id:Ze,open:ne,keyPath:G},c));return U&&(Fe=U(Fe,e,{selected:ae,active:me,open:ne,disabled:J})),d.createElement(ad,{onItemClick:Be,mode:M==="horizontal"?"vertical":M,itemIcon:te,expandIcon:ce},Fe)}),Jv=d.forwardRef(function(e,t){var n=e.eventKey,r=e.children,o=fc(n),i=Rw(r,o),a=Qv();d.useEffect(function(){if(a)return a.registerPath(n,o),function(){a.unregisterPath(n,o)}},[o]);var s;return a?s=i:s=d.createElement(g9,$e({ref:t},e),i),d.createElement(RP.Provider,{value:o},s)});function Dw(e){var t=e.className,n=e.style,r=d.useContext(mi),o=r.prefixCls,i=Qv();return i?null:d.createElement("li",{role:"separator",className:ie("".concat(o,"-item-divider"),t),style:n})}var m9=["className","title","eventKey","children"],b9=d.forwardRef(function(e,t){var n=e.className,r=e.title;e.eventKey;var o=e.children,i=Mt(e,m9),a=d.useContext(mi),s=a.prefixCls,c="".concat(s,"-item-group");return d.createElement("li",$e({ref:t,role:"presentation"},i,{onClick:function(p){return 
p.stopPropagation()},className:ie(c,n)}),d.createElement("div",{role:"presentation",className:"".concat(c,"-title"),title:typeof r=="string"?r:void 0},r),d.createElement("ul",{role:"group",className:"".concat(c,"-list")},o))}),jw=d.forwardRef(function(e,t){var n=e.eventKey,r=e.children,o=fc(n),i=Rw(r,o),a=Qv();return a?i:d.createElement(b9,$e({ref:t},Ln(e,["warnKey"])),i)}),y9=["label","children","key","type","extra"];function Rb(e,t,n){var r=t.item,o=t.group,i=t.submenu,a=t.divider;return(e||[]).map(function(s,c){if(s&&st(s)==="object"){var u=s,p=u.label,v=u.children,h=u.key,m=u.type,b=u.extra,y=Mt(u,y9),w=h??"tmp-".concat(c);return v||m==="group"?m==="group"?d.createElement(o,$e({key:w},y,{title:p}),Rb(v,t,n)):d.createElement(i,$e({key:w},y,{title:p}),Rb(v,t,n)):m==="divider"?d.createElement(a,$e({key:w},y)):d.createElement(r,$e({key:w},y),p,(!!b||b===0)&&d.createElement("span",{className:"".concat(n,"-item-extra")},b))}return null}).filter(function(s){return s})}function gk(e,t,n,r,o){var i=e,a=Z({divider:Dw,item:Zv,group:jw,submenu:Jv},r);return t&&(i=Rb(t,a,o)),Rw(i,n)}var w9=["prefixCls","rootClassName","style","className","tabIndex","items","children","direction","id","mode","inlineCollapsed","disabled","disabledOverflow","subMenuOpenDelay","subMenuCloseDelay","forceSubMenuRender","defaultOpenKeys","openKeys","activeKey","defaultActiveFirst","selectable","multiple","defaultSelectedKeys","selectedKeys","onSelect","onDeselect","inlineIndent","motion","defaultMotions","triggerSubMenuAction","builtinPlacements","itemIcon","expandIcon","overflowedIndicator","overflowedIndicatorPopupClassName","getPopupContainer","onClick","onOpenChange","onKeyDown","openAnimation","openTransitionName","_internalRenderMenuItem","_internalRenderSubMenuItem","_internalComponents"],fs=[],x9=d.forwardRef(function(e,t){var n,r=e,o=r.prefixCls,i=o===void 0?"rc-menu":o,a=r.rootClassName,s=r.style,c=r.className,u=r.tabIndex,p=u===void 
0?0:u,v=r.items,h=r.children,m=r.direction,b=r.id,y=r.mode,w=y===void 0?"vertical":y,C=r.inlineCollapsed,S=r.disabled,E=r.disabledOverflow,k=r.subMenuOpenDelay,O=k===void 0?.1:k,$=r.subMenuCloseDelay,T=$===void 0?.1:$,M=r.forceSubMenuRender,P=r.defaultOpenKeys,R=r.openKeys,A=r.activeKey,V=r.defaultActiveFirst,z=r.selectable,B=z===void 0?!0:z,_=r.multiple,H=_===void 0?!1:_,j=r.defaultSelectedKeys,L=r.selectedKeys,F=r.onSelect,U=r.onDeselect,D=r.inlineIndent,W=D===void 0?24:D,G=r.motion,q=r.defaultMotions,J=r.triggerSubMenuAction,Y=J===void 0?"hover":J,Q=r.builtinPlacements,te=r.itemIcon,ce=r.expandIcon,se=r.overflowedIndicator,ne=se===void 0?"...":se,ae=r.overflowedIndicatorPopupClassName,ee=r.getPopupContainer,re=r.onClick,le=r.onOpenChange,pe=r.onKeyDown;r.openAnimation,r.openTransitionName;var Oe=r._internalRenderMenuItem,ge=r._internalRenderSubMenuItem,Re=r._internalComponents,ye=Mt(r,w9),Te=d.useMemo(function(){return[gk(h,v,fs,Re,i),gk(h,v,fs,{},i)]},[h,v,Re]),Ae=ve(Te,2),me=Ae[0],Ie=Ae[1],Le=d.useState(!1),Be=ve(Le,2),et=Be[0],rt=Be[1],Ze=d.useRef(),Ve=J7(b),Ye=m==="rtl",Ge=Dn(P,{value:R,postState:function(Rt){return Rt||fs}}),Fe=ve(Ge,2),we=Fe[0],ze=Fe[1],Me=function(Rt){var Ht=arguments.length>1&&arguments[1]!==void 0?arguments[1]:!1;function Jt(){ze(Rt),le==null||le(Rt)}Ht?pi.flushSync(Jt):Jt()},Pe=d.useState(we),Ke=ve(Pe,2),St=Ke[0],Ft=Ke[1],Lt=d.useRef(!1),Ct=d.useMemo(function(){return(w==="inline"||w==="vertical")&&C?["vertical",C]:[w,!1]},[w,C]),Xt=ve(Ct,2),Pt=Xt[0],Gt=Xt[1],ft=Pt==="inline",Je=d.useState(Pt),He=ve(Je,2),We=He[0],Et=He[1],wt=d.useState(Gt),_e=ve(wt,2),qe=_e[0],ot=_e[1];d.useEffect(function(){Et(Pt),ot(Gt),Lt.current&&(ft?ze(St):Me(fs))},[Pt,Gt]);var at=d.useState(0),xt=ve(at,2),_t=xt[0],pt=xt[1],dt=_t>=me.length-1||We!=="horizontal"||E;d.useEffect(function(){ft&&Ft(we)},[we]),d.useEffect(function(){return Lt.current=!0,function(){Lt.current=!1}},[]);var 
$t=Q7(),kt=$t.registerPath,Kt=$t.unregisterPath,ln=$t.refreshOverflowKeys,Yt=$t.isSubPathKey,un=$t.getKeyPath,ut=$t.getKeys,lt=$t.getSubPathKeys,gt=d.useMemo(function(){return{registerPath:kt,unregisterPath:Kt}},[kt,Kt]),Qt=d.useMemo(function(){return{isSubPathKey:Yt}},[Yt]);d.useEffect(function(){ln(dt?fs:me.slice(_t+1).map(function(ct){return ct.key}))},[_t,dt]);var dn=Dn(A||V&&((n=me[0])===null||n===void 0?void 0:n.key),{value:A}),tn=ve(dn,2),Sn=tn[0],Xn=tn[1],or=fu(function(ct){Xn(ct)}),tr=fu(function(){Xn(void 0)});d.useImperativeHandle(t,function(){return{list:Ze.current,focus:function(Rt){var Ht,Jt=ut(),an=Mb(Jt,Ve),_n=an.elements,Cn=an.key2element,hr=an.element2key,ir=Pw(Ze.current,_n),Wt=Sn??(ir[0]?hr.get(ir[0]):(Ht=me.find(function(At){return!At.props.disabled}))===null||Ht===void 0?void 0:Ht.key),ar=Cn.get(Wt);if(Wt&&ar){var Tr;ar==null||(Tr=ar.focus)===null||Tr===void 0||Tr.call(ar,Rt)}}}});var mt=Dn(j||[],{value:L,postState:function(Rt){return Array.isArray(Rt)?Rt:Rt==null?fs:[Rt]}}),Bt=ve(mt,2),Zt=Bt[0],hn=Bt[1],dr=function(Rt){if(B){var Ht=Rt.key,Jt=Zt.includes(Ht),an;H?Jt?an=Zt.filter(function(Cn){return Cn!==Ht}):an=[].concat(Se(Zt),[Ht]):an=[Ht],hn(an);var _n=Z(Z({},Rt),{},{selectedKeys:an});Jt?U==null||U(_n):F==null||F(_n)}!H&&we.length&&We!=="inline"&&Me(fs)},Gn=fu(function(ct){re==null||re(ev(ct)),dr(ct)}),fr=fu(function(ct,Rt){var Ht=we.filter(function(an){return an!==ct});if(Rt)Ht.push(ct);else if(We!=="inline"){var Jt=lt(ct);Ht=Ht.filter(function(an){return!Jt.has(an)})}zi(we,Ht,!0)||Me(Ht,!0)}),pr=function(Rt,Ht){var Jt=Ht??!we.includes(Rt);fr(Rt,Jt)},Qr=X7(We,Sn,Ye,Ve,Ze,ut,un,Xn,pr,pe);d.useEffect(function(){rt(!0)},[]);var vr=d.useMemo(function(){return{_internalRenderMenuItem:Oe,_internalRenderSubMenuItem:ge}},[Oe,ge]),Tn=We!=="horizontal"||E?me:me.map(function(ct,Rt){return 
d.createElement(ad,{key:ct.key,overflowDisabled:Rt>_t},ct)}),Vt=d.createElement(Di,$e({id:b,ref:Ze,prefixCls:"".concat(i,"-overflow"),component:"ul",itemComponent:Zv,className:ie(i,"".concat(i,"-root"),"".concat(i,"-").concat(We),c,K(K({},"".concat(i,"-inline-collapsed"),qe),"".concat(i,"-rtl"),Ye),a),dir:m,style:s,role:"menu",tabIndex:p,data:Tn,renderRawItem:function(Rt){return Rt},renderRawRest:function(Rt){var Ht=Rt.length,Jt=Ht?me.slice(-Ht):null;return d.createElement(Jv,{eventKey:Nb,title:ne,disabled:dt,internalPopupClose:Ht===0,popupClassName:ae},Jt)},maxCount:We!=="horizontal"||E?Di.INVALIDATE:Di.RESPONSIVE,ssr:"full","data-menu-list":!0,onVisibleChange:function(Rt){pt(Rt)},onKeyDown:Qr},ye));return d.createElement(Tw.Provider,{value:vr},d.createElement(TP.Provider,{value:Ve},d.createElement(ad,{prefixCls:i,rootClassName:a,mode:We,openKeys:we,rtl:Ye,disabled:S,motion:et?G:null,defaultMotions:et?q:null,activeKey:Sn,onActive:or,onInactive:tr,selectedKeys:Zt,inlineIndent:W,subMenuOpenDelay:O,subMenuCloseDelay:T,forceSubMenuRender:M,builtinPlacements:Q,triggerSubMenuAction:Y,getPopupContainer:ee,itemIcon:te,expandIcon:ce,onItemClick:Gn,onOpenChange:fr},d.createElement(DP.Provider,{value:Qt},Vt),d.createElement("div",{style:{display:"none"},"aria-hidden":!0},d.createElement(NP.Provider,{value:gt},Ie)))))}),Id=x9;Id.Item=Zv;Id.SubMenu=Jv;Id.ItemGroup=jw;Id.Divider=Dw;var S9={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M724 218.3V141c0-6.7-7.7-10.4-12.9-6.3L260.3 486.8a31.86 31.86 0 000 50.3l450.8 352.1c5.3 4.1 12.9.4 12.9-6.3v-77.3c0-4.9-2.3-9.6-6.1-12.6l-360-281 360-281.1c3.8-3 6.1-7.7 6.1-12.6z"}}]},name:"left",theme:"outlined"},C9=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:S9}))},Db=d.forwardRef(C9);const HP=d.createContext({});var E9={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M176 511a56 56 0 10112 0 56 56 0 10-112 0zm280 0a56 56 0 
10112 0 56 56 0 10-112 0zm280 0a56 56 0 10112 0 56 56 0 10-112 0z"}}]},name:"ellipsis",theme:"outlined"},k9=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:E9}))},FP=d.forwardRef(k9);const tv=d.createContext({prefixCls:"",firstLevel:!0,inlineCollapsed:!1});var O9=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{const{prefixCls:t,className:n,dashed:r}=e,o=O9(e,["prefixCls","className","dashed"]),{getPrefixCls:i}=d.useContext(ht),a=i("menu",t),s=ie({[`${a}-item-divider-dashed`]:!!r},n);return d.createElement(Dw,Object.assign({className:s},o))},VP=e=>{var t;const{className:n,children:r,icon:o,title:i,danger:a}=e,{prefixCls:s,firstLevel:c,direction:u,disableMenuItemTitleTooltip:p,inlineCollapsed:v}=d.useContext(tv),h=S=>{const E=r==null?void 0:r[0],k=d.createElement("span",{className:`${s}-title-content`},r);return(!o||d.isValidElement(r)&&r.type==="span")&&r&&S&&c&&typeof E=="string"?d.createElement("div",{className:`${s}-inline-collapsed-noicon`},E.charAt(0)):k},{siderCollapsed:m}=d.useContext(HP);let b=i;typeof i>"u"?b=c?r:"":i===!1&&(b="");const y={title:b};!m&&!v&&(y.title=null,y.open=!1);const w=lo(r).length;let C=d.createElement(Zv,Object.assign({},Ln(e,["title","icon","danger"]),{className:ie({[`${s}-item-danger`]:a,[`${s}-item-only-child`]:(o?w+1:w)===1},n),title:typeof i=="string"?i:void 0}),Dr(o,{className:ie(d.isValidElement(o)?(t=o.props)===null||t===void 0?void 0:t.className:"",`${s}-item-icon`)}),h(v));return p||(C=d.createElement(gi,Object.assign({},y,{placement:u==="rtl"?"left":"right",overlayClassName:`${s}-inline-collapsed-tooltip`}),C)),C};var $9=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var 
o=0,r=Object.getOwnPropertySymbols(e);o{const{children:n}=e,r=$9(e,["children"]),o=d.useContext(nv),i=d.useMemo(()=>Object.assign(Object.assign({},o),r),[o,r.prefixCls,r.mode,r.selectable,r.rootClassName]),a=PL(n),s=Bs(t,a?n.ref:null);return d.createElement(nv.Provider,{value:i},d.createElement(nd,{space:!0},a?d.cloneElement(n,{ref:s}):n))}),I9=e=>{const{componentCls:t,motionDurationSlow:n,horizontalLineHeight:r,colorSplit:o,lineWidth:i,lineType:a,itemPaddingInline:s}=e;return{[`${t}-horizontal`]:{lineHeight:r,border:0,borderBottom:`${de(i)} ${a} ${o}`,boxShadow:"none","&::after":{display:"block",clear:"both",height:0,content:'"\\20"'},[`${t}-item, ${t}-submenu`]:{position:"relative",display:"inline-block",verticalAlign:"bottom",paddingInline:s},[`> ${t}-item:hover, - > ${t}-item-active, - > ${t}-submenu ${t}-submenu-title:hover`]:{backgroundColor:"transparent"},[`${t}-item, ${t}-submenu-title`]:{transition:[`border-color ${n}`,`background ${n}`].join(",")},[`${t}-submenu-arrow`]:{display:"none"}}}},T9=e=>{let{componentCls:t,menuArrowOffset:n,calc:r}=e;return{[`${t}-rtl`]:{direction:"rtl"},[`${t}-submenu-rtl`]:{transformOrigin:"100% 0"},[`${t}-rtl${t}-vertical, - ${t}-submenu-rtl ${t}-vertical`]:{[`${t}-submenu-arrow`]:{"&::before":{transform:`rotate(-45deg) translateY(${de(r(n).mul(-1).equal())})`},"&::after":{transform:`rotate(45deg) 
translateY(${de(n)})`}}}}},mk=e=>Object.assign({},qa(e)),bk=(e,t)=>{const{componentCls:n,itemColor:r,itemSelectedColor:o,groupTitleColor:i,itemBg:a,subMenuItemBg:s,itemSelectedBg:c,activeBarHeight:u,activeBarWidth:p,activeBarBorderWidth:v,motionDurationSlow:h,motionEaseInOut:m,motionEaseOut:b,itemPaddingInline:y,motionDurationMid:w,itemHoverColor:C,lineType:S,colorSplit:E,itemDisabledColor:k,dangerItemColor:O,dangerItemHoverColor:$,dangerItemSelectedColor:T,dangerItemActiveBg:M,dangerItemSelectedBg:P,popupBg:R,itemHoverBg:A,itemActiveBg:V,menuSubMenuBg:z,horizontalItemSelectedColor:B,horizontalItemSelectedBg:_,horizontalItemBorderRadius:H,horizontalItemHoverBg:j}=e;return{[`${n}-${t}, ${n}-${t} > ${n}`]:{color:r,background:a,[`&${n}-root:focus-visible`]:Object.assign({},mk(e)),[`${n}-item-group-title`]:{color:i},[`${n}-submenu-selected`]:{[`> ${n}-submenu-title`]:{color:o}},[`${n}-item, ${n}-submenu-title`]:{color:r,[`&:not(${n}-item-disabled):focus-visible`]:Object.assign({},mk(e))},[`${n}-item-disabled, ${n}-submenu-disabled`]:{color:`${k} !important`},[`${n}-item:not(${n}-item-selected):not(${n}-submenu-selected)`]:{[`&:hover, > ${n}-submenu-title:hover`]:{color:C}},[`&:not(${n}-horizontal)`]:{[`${n}-item:not(${n}-item-selected)`]:{"&:hover":{backgroundColor:A},"&:active":{backgroundColor:V}},[`${n}-submenu-title`]:{"&:hover":{backgroundColor:A},"&:active":{backgroundColor:V}}},[`${n}-item-danger`]:{color:O,[`&${n}-item:hover`]:{[`&:not(${n}-item-selected):not(${n}-submenu-selected)`]:{color:$}},[`&${n}-item:active`]:{background:M}},[`${n}-item a`]:{"&, &:hover":{color:"inherit"}},[`${n}-item-selected`]:{color:o,[`&${n}-item-danger`]:{color:T},"a, a:hover":{color:"inherit"}},[`& ${n}-item-selected`]:{backgroundColor:c,[`&${n}-item-danger`]:{backgroundColor:P}},[`&${n}-submenu > ${n}`]:{backgroundColor:z},[`&${n}-popup > ${n}`]:{backgroundColor:R},[`&${n}-submenu-popup > 
${n}`]:{backgroundColor:R},[`&${n}-horizontal`]:Object.assign(Object.assign({},t==="dark"?{borderBottom:0}:{}),{[`> ${n}-item, > ${n}-submenu`]:{top:v,marginTop:e.calc(v).mul(-1).equal(),marginBottom:0,borderRadius:H,"&::after":{position:"absolute",insetInline:y,bottom:0,borderBottom:`${de(u)} solid transparent`,transition:`border-color ${h} ${m}`,content:'""'},"&:hover, &-active, &-open":{background:j,"&::after":{borderBottomWidth:u,borderBottomColor:B}},"&-selected":{color:B,backgroundColor:_,"&:hover":{backgroundColor:_},"&::after":{borderBottomWidth:u,borderBottomColor:B}}}}),[`&${n}-root`]:{[`&${n}-inline, &${n}-vertical`]:{borderInlineEnd:`${de(v)} ${S} ${E}`}},[`&${n}-inline`]:{[`${n}-sub${n}-inline`]:{background:s},[`${n}-item`]:{position:"relative","&::after":{position:"absolute",insetBlock:0,insetInlineEnd:0,borderInlineEnd:`${de(p)} solid ${o}`,transform:"scaleY(0.0001)",opacity:0,transition:[`transform ${w} ${b}`,`opacity ${w} ${b}`].join(","),content:'""'},[`&${n}-item-danger`]:{"&::after":{borderInlineEndColor:T}}},[`${n}-selected, ${n}-item-selected`]:{"&::after":{transform:"scaleY(1)",opacity:1,transition:[`transform ${w} ${m}`,`opacity ${w} ${m}`].join(",")}}}}}},yk=e=>{const{componentCls:t,itemHeight:n,itemMarginInline:r,padding:o,menuArrowSize:i,marginXS:a,itemMarginBlock:s,itemWidth:c,itemPaddingInline:u}=e,p=e.calc(i).add(o).add(a).equal();return{[`${t}-item`]:{position:"relative",overflow:"hidden"},[`${t}-item, ${t}-submenu-title`]:{height:n,lineHeight:de(n),paddingInline:u,overflow:"hidden",textOverflow:"ellipsis",marginInline:r,marginBlock:s,width:c},[`> ${t}-item, - > ${t}-submenu > ${t}-submenu-title`]:{height:n,lineHeight:de(n)},[`${t}-item-group-list ${t}-submenu-title, - 
${t}-submenu-title`]:{paddingInlineEnd:p}}},P9=e=>{const{componentCls:t,iconCls:n,itemHeight:r,colorTextLightSolid:o,dropdownWidth:i,controlHeightLG:a,motionEaseOut:s,paddingXL:c,itemMarginInline:u,fontSizeLG:p,motionDurationFast:v,motionDurationSlow:h,paddingXS:m,boxShadowSecondary:b,collapsedWidth:y,collapsedIconSize:w}=e,C={height:r,lineHeight:de(r),listStylePosition:"inside",listStyleType:"disc"};return[{[t]:{"&-inline, &-vertical":Object.assign({[`&${t}-root`]:{boxShadow:"none"}},yk(e))},[`${t}-submenu-popup`]:{[`${t}-vertical`]:Object.assign(Object.assign({},yk(e)),{boxShadow:b})}},{[`${t}-submenu-popup ${t}-vertical${t}-sub`]:{minWidth:i,maxHeight:`calc(100vh - ${de(e.calc(a).mul(2.5).equal())})`,padding:"0",overflow:"hidden",borderInlineEnd:0,"&:not([class*='-active'])":{overflowX:"hidden",overflowY:"auto"}}},{[`${t}-inline`]:{width:"100%",[`&${t}-root`]:{[`${t}-item, ${t}-submenu-title`]:{display:"flex",alignItems:"center",transition:[`border-color ${h}`,`background ${h}`,`padding ${v} ${s}`].join(","),[`> ${t}-title-content`]:{flex:"auto",minWidth:0,overflow:"hidden",textOverflow:"ellipsis"},"> *":{flex:"none"}}},[`${t}-sub${t}-inline`]:{padding:0,border:0,borderRadius:0,boxShadow:"none",[`& > ${t}-submenu > ${t}-submenu-title`]:C,[`& ${t}-item-group-title`]:{paddingInlineStart:c}},[`${t}-item`]:C}},{[`${t}-inline-collapsed`]:{width:y,[`&${t}-root`]:{[`${t}-item, ${t}-submenu ${t}-submenu-title`]:{[`> ${t}-inline-collapsed-noicon`]:{fontSize:p,textAlign:"center"}}},[`> ${t}-item, - > ${t}-item-group > ${t}-item-group-list > ${t}-item, - > ${t}-item-group > ${t}-item-group-list > ${t}-submenu > ${t}-submenu-title, - > ${t}-submenu > ${t}-submenu-title`]:{insetInlineStart:0,paddingInline:`calc(50% - ${de(e.calc(p).div(2).equal())} - ${de(u)})`,textOverflow:"clip",[` - ${t}-submenu-arrow, - ${t}-submenu-expand-icon - `]:{opacity:0},[`${t}-item-icon, ${n}`]:{margin:0,fontSize:w,lineHeight:de(r),"+ span":{display:"inline-block",opacity:0}}},[`${t}-item-icon, 
${n}`]:{display:"inline-block"},"&-tooltip":{pointerEvents:"none",[`${t}-item-icon, ${n}`]:{display:"none"},"a, a:hover":{color:o}},[`${t}-item-group-title`]:Object.assign(Object.assign({},Ka),{paddingInline:m})}}]},wk=e=>{const{componentCls:t,motionDurationSlow:n,motionDurationMid:r,motionEaseInOut:o,motionEaseOut:i,iconCls:a,iconSize:s,iconMarginInlineEnd:c}=e;return{[`${t}-item, ${t}-submenu-title`]:{position:"relative",display:"block",margin:0,whiteSpace:"nowrap",cursor:"pointer",transition:[`border-color ${n}`,`background ${n}`,`padding calc(${n} + 0.1s) ${o}`].join(","),[`${t}-item-icon, ${a}`]:{minWidth:s,fontSize:s,transition:[`font-size ${r} ${i}`,`margin ${n} ${o}`,`color ${n}`].join(","),"+ span":{marginInlineStart:c,opacity:1,transition:[`opacity ${n} ${o}`,`margin ${n}`,`color ${n}`].join(",")}},[`${t}-item-icon`]:Object.assign({},Mv()),[`&${t}-item-only-child`]:{[`> ${a}, > ${t}-item-icon`]:{marginInlineEnd:0}}},[`${t}-item-disabled, ${t}-submenu-disabled`]:{background:"none !important",cursor:"not-allowed","&::after":{borderColor:"transparent !important"},a:{color:"inherit !important"},[`> ${t}-submenu-title`]:{color:"inherit !important",cursor:"not-allowed"}}}},xk=e=>{const{componentCls:t,motionDurationSlow:n,motionEaseInOut:r,borderRadius:o,menuArrowSize:i,menuArrowOffset:a}=e;return{[`${t}-submenu`]:{"&-expand-icon, &-arrow":{position:"absolute",top:"50%",insetInlineEnd:e.margin,width:i,color:"currentcolor",transform:"translateY(-50%)",transition:`transform ${n} ${r}, opacity ${n}`},"&-arrow":{"&::before, &::after":{position:"absolute",width:e.calc(i).mul(.6).equal(),height:e.calc(i).mul(.15).equal(),backgroundColor:"currentcolor",borderRadius:o,transition:[`background ${n} ${r}`,`transform ${n} ${r}`,`top ${n} ${r}`,`color ${n} ${r}`].join(","),content:'""'},"&::before":{transform:`rotate(45deg) translateY(${de(e.calc(a).mul(-1).equal())})`},"&::after":{transform:`rotate(-45deg) 
translateY(${de(a)})`}}}}},M9=e=>{const{antCls:t,componentCls:n,fontSize:r,motionDurationSlow:o,motionDurationMid:i,motionEaseInOut:a,paddingXS:s,padding:c,colorSplit:u,lineWidth:p,zIndexPopup:v,borderRadiusLG:h,subMenuItemBorderRadius:m,menuArrowSize:b,menuArrowOffset:y,lineType:w,groupTitleLineHeight:C,groupTitleFontSize:S}=e;return[{"":{[n]:Object.assign(Object.assign({},Ps()),{"&-hidden":{display:"none"}})},[`${n}-submenu-hidden`]:{display:"none"}},{[n]:Object.assign(Object.assign(Object.assign(Object.assign(Object.assign(Object.assign(Object.assign({},jn(e)),Ps()),{marginBottom:0,paddingInlineStart:0,fontSize:r,lineHeight:0,listStyle:"none",outline:"none",transition:`width ${o} cubic-bezier(0.2, 0, 0, 1) 0s`,"ul, ol":{margin:0,padding:0,listStyle:"none"},"&-overflow":{display:"flex",[`${n}-item`]:{flex:"none"}},[`${n}-item, ${n}-submenu, ${n}-submenu-title`]:{borderRadius:e.itemBorderRadius},[`${n}-item-group-title`]:{padding:`${de(s)} ${de(c)}`,fontSize:S,lineHeight:C,transition:`all ${o}`},[`&-horizontal ${n}-submenu`]:{transition:[`border-color ${o} ${a}`,`background ${o} ${a}`].join(",")},[`${n}-submenu, ${n}-submenu-inline`]:{transition:[`border-color ${o} ${a}`,`background ${o} ${a}`,`padding ${i} ${a}`].join(",")},[`${n}-submenu ${n}-sub`]:{cursor:"initial",transition:[`background ${o} ${a}`,`padding ${o} ${a}`].join(",")},[`${n}-title-content`]:{display:"inline-flex",alignItems:"center",transition:`color ${o}`,"> a:first-child":{flexGrow:1},[`> ${t}-typography-ellipsis-single-line`]:{display:"inline",verticalAlign:"unset"},[`${n}-item-extra`]:{marginInlineStart:"auto",paddingInlineStart:e.padding,fontSize:e.fontSizeSM,color:e.colorTextDescription}},[`${n}-item 
a`]:{"&::before":{position:"absolute",inset:0,backgroundColor:"transparent",content:'""'}},[`${n}-item-divider`]:{overflow:"hidden",lineHeight:0,borderColor:u,borderStyle:w,borderWidth:0,borderTopWidth:p,marginBlock:p,padding:0,"&-dashed":{borderStyle:"dashed"}}}),wk(e)),{[`${n}-item-group`]:{[`${n}-item-group-list`]:{margin:0,padding:0,[`${n}-item, ${n}-submenu-title`]:{paddingInline:`${de(e.calc(r).mul(2).equal())} ${de(c)}`}}},"&-submenu":{"&-popup":{position:"absolute",zIndex:v,borderRadius:h,boxShadow:"none",transformOrigin:"0 0",[`&${n}-submenu`]:{background:"transparent"},"&::before":{position:"absolute",inset:0,zIndex:-1,width:"100%",height:"100%",opacity:0,content:'""'},[`> ${n}`]:Object.assign(Object.assign(Object.assign({borderRadius:h},wk(e)),xk(e)),{[`${n}-item, ${n}-submenu > ${n}-submenu-title`]:{borderRadius:m},[`${n}-submenu-title::after`]:{transition:`transform ${o} ${a}`}})},"\n &-placement-leftTop,\n &-placement-bottomRight,\n ":{transformOrigin:"100% 0"},"\n &-placement-leftBottom,\n &-placement-topRight,\n ":{transformOrigin:"100% 100%"},"\n &-placement-rightBottom,\n &-placement-topLeft,\n ":{transformOrigin:"0 100%"},"\n &-placement-bottomLeft,\n &-placement-rightTop,\n ":{transformOrigin:"0 0"},"\n &-placement-leftTop,\n &-placement-leftBottom\n ":{paddingInlineEnd:e.paddingXS},"\n &-placement-rightTop,\n &-placement-rightBottom\n ":{paddingInlineStart:e.paddingXS},"\n &-placement-topRight,\n &-placement-topLeft\n ":{paddingBottom:e.paddingXS},"\n &-placement-bottomRight,\n &-placement-bottomLeft\n ":{paddingTop:e.paddingXS}}}),xk(e)),{[`&-inline-collapsed ${n}-submenu-arrow, - &-inline ${n}-submenu-arrow`]:{"&::before":{transform:`rotate(-45deg) translateX(${de(y)})`},"&::after":{transform:`rotate(45deg) translateX(${de(e.calc(y).mul(-1).equal())})`}},[`${n}-submenu-open${n}-submenu-inline > ${n}-submenu-title > ${n}-submenu-arrow`]:{transform:`translateY(${de(e.calc(b).mul(.2).mul(-1).equal())})`,"&::after":{transform:`rotate(-45deg) 
translateX(${de(e.calc(y).mul(-1).equal())})`},"&::before":{transform:`rotate(45deg) translateX(${de(y)})`}}})},{[`${t}-layout-header`]:{[n]:{lineHeight:"inherit"}}}]},N9=e=>{var t,n,r;const{colorPrimary:o,colorError:i,colorTextDisabled:a,colorErrorBg:s,colorText:c,colorTextDescription:u,colorBgContainer:p,colorFillAlter:v,colorFillContent:h,lineWidth:m,lineWidthBold:b,controlItemBgActive:y,colorBgTextHover:w,controlHeightLG:C,lineHeight:S,colorBgElevated:E,marginXXS:k,padding:O,fontSize:$,controlHeightSM:T,fontSizeLG:M,colorTextLightSolid:P,colorErrorHover:R}=e,A=(t=e.activeBarWidth)!==null&&t!==void 0?t:0,V=(n=e.activeBarBorderWidth)!==null&&n!==void 0?n:m,z=(r=e.itemMarginInline)!==null&&r!==void 0?r:e.marginXXS,B=new xn(P).setAlpha(.65).toRgbString();return{dropdownWidth:160,zIndexPopup:e.zIndexPopupBase+50,radiusItem:e.borderRadiusLG,itemBorderRadius:e.borderRadiusLG,radiusSubMenuItem:e.borderRadiusSM,subMenuItemBorderRadius:e.borderRadiusSM,colorItemText:c,itemColor:c,colorItemTextHover:c,itemHoverColor:c,colorItemTextHoverHorizontal:o,horizontalItemHoverColor:o,colorGroupTitle:u,groupTitleColor:u,colorItemTextSelected:o,itemSelectedColor:o,colorItemTextSelectedHorizontal:o,horizontalItemSelectedColor:o,colorItemBg:p,itemBg:p,colorItemBgHover:w,itemHoverBg:w,colorItemBgActive:h,itemActiveBg:y,colorSubItemBg:v,subMenuItemBg:v,colorItemBgSelected:y,itemSelectedBg:y,colorItemBgSelectedHorizontal:"transparent",horizontalItemSelectedBg:"transparent",colorActiveBarWidth:0,activeBarWidth:A,colorActiveBarHeight:b,activeBarHeight:b,colorActiveBarBorderSize:m,activeBarBorderWidth:V,colorItemTextDisabled:a,itemDisabledColor:a,colorDangerItemText:i,dangerItemColor:i,colorDangerItemTextHover:i,dangerItemHoverColor:i,colorDangerItemTextSelected:i,dangerItemSelectedColor:i,colorDangerItemBgActive:s,dangerItemActiveBg:s,colorDangerItemBgSelected:s,dangerItemSelectedBg:s,itemMarginInline:z,horizontalItemBorderRadius:0,horizontalItemHoverBg:"transparent",itemHeight:C,groupTitle
LineHeight:S,collapsedWidth:C*2,popupBg:E,itemMarginBlock:k,itemPaddingInline:O,horizontalLineHeight:`${C*1.15}px`,iconSize:$,iconMarginInlineEnd:T-$,collapsedIconSize:M,groupTitleFontSize:$,darkItemDisabledColor:new xn(P).setAlpha(.25).toRgbString(),darkItemColor:B,darkDangerItemColor:i,darkItemBg:"#001529",darkPopupBg:"#001529",darkSubMenuItemBg:"#000c17",darkItemSelectedColor:P,darkItemSelectedBg:o,darkDangerItemSelectedBg:i,darkItemHoverBg:"transparent",darkGroupTitleColor:B,darkItemHoverColor:P,darkDangerItemHoverColor:R,darkDangerItemSelectedColor:P,darkDangerItemActiveBg:i,itemWidth:A?`calc(100% + ${V}px)`:`calc(100% - ${z*2}px)`}},R9=function(e){let t=arguments.length>1&&arguments[1]!==void 0?arguments[1]:e,n=arguments.length>2&&arguments[2]!==void 0?arguments[2]:!0;return In("Menu",o=>{const{colorBgElevated:i,controlHeightLG:a,fontSize:s,darkItemColor:c,darkDangerItemColor:u,darkItemBg:p,darkSubMenuItemBg:v,darkItemSelectedColor:h,darkItemSelectedBg:m,darkDangerItemSelectedBg:b,darkItemHoverBg:y,darkGroupTitleColor:w,darkItemHoverColor:C,darkItemDisabledColor:S,darkDangerItemHoverColor:E,darkDangerItemSelectedColor:k,darkDangerItemActiveBg:O,popupBg:$,darkPopupBg:T}=o,M=o.calc(s).div(7).mul(5).equal(),P=vn(o,{menuArrowSize:M,menuHorizontalHeight:o.calc(a).mul(1.15).equal(),menuArrowOffset:o.calc(M).mul(.25).equal(),menuSubMenuBg:i,calc:o.calc,popupBg:$}),R=vn(P,{itemColor:c,itemHoverColor:C,groupTitleColor:w,itemSelectedColor:h,itemBg:p,popupBg:T,subMenuItemBg:v,itemActiveBg:"transparent",itemSelectedBg:m,activeBarHeight:0,activeBarBorderWidth:0,itemHoverBg:y,itemDisabledColor:S,dangerItemColor:u,dangerItemHoverColor:E,dangerItemSelectedColor:k,dangerItemActiveBg:O,dangerItemSelectedBg:b,menuSubMenuBg:v,horizontalItemSelectedColor:h,horizontalItemSelectedBg:m});return[M9(P),I9(P),P9(P),bk(P,"light"),bk(R,"dark"),T9(P),zv(P),Gl(P,"slide-up"),Gl(P,"slide-down"),Sd(P,"zoom-big")]},N9,{deprecatedTokens:[["colorGroupTitle","groupTitleColor"],["radiusItem","itemB
orderRadius"],["radiusSubMenuItem","subMenuItemBorderRadius"],["colorItemText","itemColor"],["colorItemTextHover","itemHoverColor"],["colorItemTextHoverHorizontal","horizontalItemHoverColor"],["colorItemTextSelected","itemSelectedColor"],["colorItemTextSelectedHorizontal","horizontalItemSelectedColor"],["colorItemTextDisabled","itemDisabledColor"],["colorDangerItemText","dangerItemColor"],["colorDangerItemTextHover","dangerItemHoverColor"],["colorDangerItemTextSelected","dangerItemSelectedColor"],["colorDangerItemBgActive","dangerItemActiveBg"],["colorDangerItemBgSelected","dangerItemSelectedBg"],["colorItemBg","itemBg"],["colorItemBgHover","itemHoverBg"],["colorSubItemBg","subMenuItemBg"],["colorItemBgActive","itemActiveBg"],["colorItemBgSelectedHorizontal","horizontalItemSelectedBg"],["colorActiveBarWidth","activeBarWidth"],["colorActiveBarHeight","activeBarHeight"],["colorActiveBarBorderSize","activeBarBorderWidth"],["colorItemBgSelected","itemSelectedBg"]],injectStyle:n,unitless:{groupTitleLineHeight:!0}})(e,t)},UP=e=>{var t;const{popupClassName:n,icon:r,title:o,theme:i}=e,a=d.useContext(tv),{prefixCls:s,inlineCollapsed:c,theme:u}=a,p=fc();let v;if(!r)v=c&&!p.length&&o&&typeof o=="string"?d.createElement("div",{className:`${s}-inline-collapsed-noicon`},o.charAt(0)):d.createElement("span",{className:`${s}-title-content`},o);else{const b=d.isValidElement(o)&&o.type==="span";v=d.createElement(d.Fragment,null,Dr(r,{className:ie(d.isValidElement(r)?(t=r.props)===null||t===void 0?void 0:t.className:"",`${s}-item-icon`)}),b?o:d.createElement("span",{className:`${s}-title-content`},o))}const h=d.useMemo(()=>Object.assign(Object.assign({},a),{firstLevel:!1}),[a]),[m]=sc("Menu");return d.createElement(tv.Provider,{value:h},d.createElement(Jv,Object.assign({},Ln(e,["icon"]),{title:v,popupClassName:ie(s,n,`${s}-${i||u}`),popupStyle:Object.assign({zIndex:m},e.popupStyle)})))};var D9=function(e,t){var n={};for(var r in 
e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{var n;const r=d.useContext(nv),o=r||{},{getPrefixCls:i,getPopupContainer:a,direction:s,menu:c}=d.useContext(ht),u=i(),{prefixCls:p,className:v,style:h,theme:m="light",expandIcon:b,_internalDisableMenuItemTitleTooltip:y,inlineCollapsed:w,siderCollapsed:C,rootClassName:S,mode:E,selectable:k,onClick:O,overflowedIndicatorPopupClassName:$}=e,T=D9(e,["prefixCls","className","style","theme","expandIcon","_internalDisableMenuItemTitleTooltip","inlineCollapsed","siderCollapsed","rootClassName","mode","selectable","onClick","overflowedIndicatorPopupClassName"]),M=Ln(T,["collapsedWidth"]);(n=o.validator)===null||n===void 0||n.call(o,{mode:E});const P=gn(function(){var W;O==null||O.apply(void 0,arguments),(W=o.onClick)===null||W===void 0||W.call(o)}),R=o.mode||E,A=k??o.selectable,V=d.useMemo(()=>C!==void 0?C:w,[w,C]),z={horizontal:{motionName:`${u}-slide-up`},inline:Ju(u),other:{motionName:`${u}-zoom-big`}},B=i("menu",p||o.prefixCls),_=br(B),[H,j,L]=R9(B,_,!r),F=ie(`${B}-${m}`,c==null?void 0:c.className,v),U=d.useMemo(()=>{var W,G;if(typeof b=="function"||Rm(b))return b||null;if(typeof o.expandIcon=="function"||Rm(o.expandIcon))return o.expandIcon||null;if(typeof(c==null?void 0:c.expandIcon)=="function"||Rm(c==null?void 0:c.expandIcon))return(c==null?void 0:c.expandIcon)||null;const q=(W=b??(o==null?void 0:o.expandIcon))!==null&&W!==void 0?W:c==null?void 0:c.expandIcon;return Dr(q,{className:ie(`${B}-submenu-expand-icon`,d.isValidElement(q)?(G=q.props)===null||G===void 0?void 0:G.className:void 0)})},[b,o==null?void 0:o.expandIcon,c==null?void 0:c.expandIcon,B]),D=d.useMemo(()=>({prefixCls:B,inlineCollapsed:V||!1,direction:s,firstLevel:!0,theme:m,mode:R,disableMenuItemTitleTooltip:y}),[B,V,s,y,m]);return 
H(d.createElement(nv.Provider,{value:null},d.createElement(tv.Provider,{value:D},d.createElement(Id,Object.assign({getPopupContainer:a,overflowedIndicator:d.createElement(FP,null),overflowedIndicatorPopupClassName:ie(B,`${B}-${m}`,$),mode:R,selectable:A,onClick:P},M,{inlineCollapsed:V,style:Object.assign(Object.assign({},c==null?void 0:c.style),h),className:F,prefixCls:B,direction:s,defaultMotions:z,expandIcon:U,ref:t,rootClassName:ie(S,j,o.rootClassName,L,_),_internalComponents:j9})))))}),pc=d.forwardRef((e,t)=>{const n=d.useRef(null),r=d.useContext(HP);return d.useImperativeHandle(t,()=>({menu:n.current,focus:o=>{var i;(i=n.current)===null||i===void 0||i.focus(o)}})),d.createElement(L9,Object.assign({ref:n},e,r))});pc.Item=VP;pc.SubMenu=UP;pc.Divider=_P;pc.ItemGroup=jw;const B9=e=>{const{componentCls:t,menuCls:n,colorError:r,colorTextLightSolid:o}=e,i=`${n}-item`;return{[`${t}, ${t}-menu-submenu`]:{[`${n} ${i}`]:{[`&${i}-danger:not(${i}-disabled)`]:{color:r,"&:hover":{color:o,backgroundColor:r}}}}}},A9=e=>{const{componentCls:t,menuCls:n,zIndexPopup:r,dropdownArrowDistance:o,sizePopupArrow:i,antCls:a,iconCls:s,motionDurationMid:c,paddingBlock:u,fontSize:p,dropdownEdgeChildPadding:v,colorTextDisabled:h,fontSizeIcon:m,controlPaddingHorizontal:b,colorBgElevated:y}=e;return[{[t]:{position:"absolute",top:-9999,left:{_skip_check_:!0,value:-9999},zIndex:r,display:"block","&::before":{position:"absolute",insetBlock:e.calc(i).div(2).sub(o).equal(),zIndex:-9999,opacity:1e-4,content:'""'},"&-menu-vertical":{maxHeight:"100vh",overflowY:"auto"},[`&-trigger${a}-btn`]:{[`& > ${s}-down, & > ${a}-btn-icon > ${s}-down`]:{fontSize:m}},[`${t}-wrap`]:{position:"relative",[`${a}-btn > ${s}-down`]:{fontSize:m},[`${s}-down::before`]:{transition:`transform ${c}`}},[`${t}-wrap-open`]:{[`${s}-down::before`]:{transform:"rotate(180deg)"}},"\n &-hidden,\n &-menu-hidden,\n &-menu-submenu-hidden\n ":{display:"none"},[`&${a}-slide-down-enter${a}-slide-down-enter-active${t}-placement-bottomLeft, - 
&${a}-slide-down-appear${a}-slide-down-appear-active${t}-placement-bottomLeft, - &${a}-slide-down-enter${a}-slide-down-enter-active${t}-placement-bottom, - &${a}-slide-down-appear${a}-slide-down-appear-active${t}-placement-bottom, - &${a}-slide-down-enter${a}-slide-down-enter-active${t}-placement-bottomRight, - &${a}-slide-down-appear${a}-slide-down-appear-active${t}-placement-bottomRight`]:{animationName:tw},[`&${a}-slide-up-enter${a}-slide-up-enter-active${t}-placement-topLeft, - &${a}-slide-up-appear${a}-slide-up-appear-active${t}-placement-topLeft, - &${a}-slide-up-enter${a}-slide-up-enter-active${t}-placement-top, - &${a}-slide-up-appear${a}-slide-up-appear-active${t}-placement-top, - &${a}-slide-up-enter${a}-slide-up-enter-active${t}-placement-topRight, - &${a}-slide-up-appear${a}-slide-up-appear-active${t}-placement-topRight`]:{animationName:rw},[`&${a}-slide-down-leave${a}-slide-down-leave-active${t}-placement-bottomLeft, - &${a}-slide-down-leave${a}-slide-down-leave-active${t}-placement-bottom, - &${a}-slide-down-leave${a}-slide-down-leave-active${t}-placement-bottomRight`]:{animationName:nw},[`&${a}-slide-up-leave${a}-slide-up-leave-active${t}-placement-topLeft, - &${a}-slide-up-leave${a}-slide-up-leave-active${t}-placement-top, - &${a}-slide-up-leave${a}-slide-up-leave-active${t}-placement-topRight`]:{animationName:ow}}},Iw(e,y,{arrowPlacement:{top:!0,bottom:!0}}),{[`${t} ${n}`]:{position:"relative",margin:0},[`${n}-submenu-popup`]:{position:"absolute",zIndex:r,background:"transparent",boxShadow:"none",transformOrigin:"0 0","ul, li":{listStyle:"none",margin:0}},[`${t}, ${t}-menu-submenu`]:Object.assign(Object.assign({},jn(e)),{[n]:Object.assign(Object.assign({padding:v,listStyleType:"none",backgroundColor:y,backgroundClip:"padding-box",borderRadius:e.borderRadiusLG,outline:"none",boxShadow:e.boxShadowSecondary},Xl(e)),{"&:empty":{padding:0,boxShadow:"none"},[`${n}-item-group-title`]:{padding:`${de(u)} 
${de(b)}`,color:e.colorTextDescription,transition:`all ${c}`},[`${n}-item`]:{position:"relative",display:"flex",alignItems:"center"},[`${n}-item-icon`]:{minWidth:p,marginInlineEnd:e.marginXS,fontSize:e.fontSizeSM},[`${n}-title-content`]:{display:"flex",alignItems:"center",flex:"auto","> a":{color:"inherit",transition:`all ${c}`,"&:hover":{color:"inherit"},"&::after":{position:"absolute",inset:0,content:'""'}},[`${n}-item-extra`]:{paddingInlineStart:e.padding,marginInlineStart:"auto",fontSize:e.fontSizeSM,color:e.colorTextDescription}},[`${n}-item, ${n}-submenu-title`]:Object.assign(Object.assign({display:"flex",margin:0,padding:`${de(u)} ${de(b)}`,color:e.colorText,fontWeight:"normal",fontSize:p,lineHeight:e.lineHeight,cursor:"pointer",transition:`all ${c}`,borderRadius:e.borderRadiusSM,"&:hover, &-active":{backgroundColor:e.controlItemBgHover}},Xl(e)),{"&-selected":{color:e.colorPrimary,backgroundColor:e.controlItemBgActive,"&:hover, &-active":{backgroundColor:e.controlItemBgActiveHover}},"&-disabled":{color:h,cursor:"not-allowed","&:hover":{color:h,backgroundColor:y,cursor:"not-allowed"},a:{pointerEvents:"none"}},"&-divider":{height:1,margin:`${de(e.marginXXS)} 0`,overflow:"hidden",lineHeight:0,backgroundColor:e.colorSplit},[`${t}-menu-submenu-expand-icon`]:{position:"absolute",insetInlineEnd:e.paddingXS,[`${t}-menu-submenu-arrow-icon`]:{marginInlineEnd:"0 !important",color:e.colorTextDescription,fontSize:m,fontStyle:"normal"}}}),[`${n}-item-group-list`]:{margin:`0 ${de(e.marginXS)}`,padding:0,listStyle:"none"},[`${n}-submenu-title`]:{paddingInlineEnd:e.calc(b).add(e.fontSizeSM).equal()},[`${n}-submenu-vertical`]:{position:"relative"},[`${n}-submenu${n}-submenu-disabled ${t}-menu-submenu-title`]:{[`&, ${t}-menu-submenu-arrow-icon`]:{color:h,backgroundColor:y,cursor:"not-allowed"}},[`${n}-submenu-selected 
${t}-menu-submenu-title`]:{color:e.colorPrimary}})})},[Gl(e,"slide-up"),Gl(e,"slide-down"),Qp(e,"move-up"),Qp(e,"move-down"),Sd(e,"zoom-big")]]},z9=e=>Object.assign(Object.assign({zIndexPopup:e.zIndexPopupBase+50,paddingBlock:(e.controlHeight-e.fontSize*e.lineHeight)/2},Yv({contentRadius:e.borderRadiusLG,limitVerticalRadius:!0})),$w(e)),H9=In("Dropdown",e=>{const{marginXXS:t,sizePopupArrow:n,paddingXXS:r,componentCls:o}=e,i=vn(e,{menuCls:`${o}-menu`,dropdownArrowDistance:e.calc(n).div(2).add(t).equal(),dropdownEdgeChildPadding:r});return[A9(i),B9(i)]},z9,{resetStyle:!1}),eh=e=>{var t;const{menu:n,arrow:r,prefixCls:o,children:i,trigger:a,disabled:s,dropdownRender:c,getPopupContainer:u,overlayClassName:p,rootClassName:v,overlayStyle:h,open:m,onOpenChange:b,visible:y,onVisibleChange:w,mouseEnterDelay:C=.15,mouseLeaveDelay:S=.1,autoAdjustOverflow:E=!0,placement:k="",overlay:O,transitionName:$}=e,{getPopupContainer:T,getPrefixCls:M,direction:P,dropdown:R}=d.useContext(ht);As();const A=d.useMemo(()=>{const ee=M();return $!==void 0?$:k.includes("top")?`${ee}-slide-down`:`${ee}-slide-up`},[M,k,$]),V=d.useMemo(()=>k?k.includes("Center")?k.slice(0,k.indexOf("Center")):k:P==="rtl"?"bottomRight":"bottomLeft",[k,P]),z=M("dropdown",o),B=br(z),[_,H,j]=H9(z,B),[,L]=Ir(),F=d.Children.only(i),U=Dr(F,{className:ie(`${z}-trigger`,{[`${z}-rtl`]:P==="rtl"},F.props.className),disabled:(t=F.props.disabled)!==null&&t!==void 0?t:s}),D=s?[]:a,W=!!(D!=null&&D.includes("contextMenu")),[G,q]=Dn(!1,{value:m??y}),J=gn(ee=>{b==null||b(ee,{source:"trigger"}),w==null||w(ee),q(ee)}),Y=ie(p,v,H,j,B,R==null?void 0:R.className,{[`${z}-rtl`]:P==="rtl"}),Q=xP({arrowPointAtCenter:typeof r=="object"&&r.pointAtCenter,autoAdjustOverflow:E,offset:L.marginXXS,arrowWidth:r?L.sizePopupArrow:0,borderRadius:L.borderRadius}),te=d.useCallback(()=>{n!=null&&n.selectable&&(n!=null&&n.multiple)||(b==null||b(!1,{source:"menu"}),q(!1))},[n==null?void 0:n.selectable,n==null?void 0:n.multiple]),ce=()=>{let ee;return 
n!=null&&n.items?ee=d.createElement(pc,Object.assign({},n)):typeof O=="function"?ee=O():ee=O,c&&(ee=c(ee)),ee=d.Children.only(typeof ee=="string"?d.createElement("span",null,ee):ee),d.createElement(WP,{prefixCls:`${z}-menu`,rootClassName:ie(j,B),expandIcon:d.createElement("span",{className:`${z}-menu-submenu-arrow`},d.createElement(Yp,{className:`${z}-menu-submenu-arrow-icon`})),mode:"vertical",selectable:!1,onClick:te,validator:re=>{}},ee)},[se,ne]=sc("Dropdown",h==null?void 0:h.zIndex);let ae=d.createElement(H7,Object.assign({alignPoint:W},Ln(e,["rootClassName"]),{mouseEnterDelay:C,mouseLeaveDelay:S,visible:G,builtinPlacements:Q,arrow:!!r,overlayClassName:Y,prefixCls:z,getPopupContainer:u||T,transitionName:A,trigger:D,overlay:ce,placement:V,onVisibleChange:J,overlayStyle:Object.assign(Object.assign(Object.assign({},R==null?void 0:R.style),h),{zIndex:se})}),U);return se&&(ae=d.createElement(Rv.Provider,{value:ne},ae)),_(ae)};function F9(e){return Object.assign(Object.assign({},e),{align:{overflow:{adjustX:!1,adjustY:!1}}})}const _9=bw(eh,"dropdown",e=>e,F9),V9=e=>d.createElement(_9,Object.assign({},e),d.createElement("span",null));eh._InternalPanelDoNotUseOrYouWillBeFired=V9;const KP=d.createContext(null),W9=KP.Provider,qP=d.createContext(null),U9=qP.Provider;var K9=["prefixCls","className","style","checked","disabled","defaultChecked","type","title","onChange"],XP=d.forwardRef(function(e,t){var n=e.prefixCls,r=n===void 0?"rc-checkbox":n,o=e.className,i=e.style,a=e.checked,s=e.disabled,c=e.defaultChecked,u=c===void 0?!1:c,p=e.type,v=p===void 0?"checkbox":p,h=e.title,m=e.onChange,b=Mt(e,K9),y=d.useRef(null),w=d.useRef(null),C=Dn(u,{value:a}),S=ve(C,2),E=S[0],k=S[1];d.useImperativeHandle(t,function(){return{focus:function(M){var P;(P=y.current)===null||P===void 0||P.focus(M)},blur:function(){var M;(M=y.current)===null||M===void 0||M.blur()},input:y.current,nativeElement:w.current}});var 
O=ie(r,o,K(K({},"".concat(r,"-checked"),E),"".concat(r,"-disabled"),s)),$=function(M){s||("checked"in e||k(M.target.checked),m==null||m({target:Z(Z({},e),{},{type:v,checked:M.target.checked}),stopPropagation:function(){M.stopPropagation()},preventDefault:function(){M.preventDefault()},nativeEvent:M.nativeEvent}))};return d.createElement("span",{className:O,title:h,style:i,ref:w},d.createElement("input",$e({},b,{className:"".concat(r,"-input"),ref:y,onChange:$,disabled:s,checked:!!E,type:v})),d.createElement("span",{className:"".concat(r,"-inner")}))});const q9=e=>{const{componentCls:t,antCls:n}=e,r=`${t}-group`;return{[r]:Object.assign(Object.assign({},jn(e)),{display:"inline-block",fontSize:0,[`&${r}-rtl`]:{direction:"rtl"},[`&${r}-block`]:{display:"flex"},[`${n}-badge ${n}-badge-count`]:{zIndex:1},[`> ${n}-badge:not(:first-child) > ${n}-button-wrapper`]:{borderInlineStart:"none"}})}},X9=e=>{const{componentCls:t,wrapperMarginInlineEnd:n,colorPrimary:r,radioSize:o,motionDurationSlow:i,motionDurationMid:a,motionEaseInOutCirc:s,colorBgContainer:c,colorBorder:u,lineWidth:p,colorBgContainerDisabled:v,colorTextDisabled:h,paddingXS:m,dotColorDisabled:b,lineType:y,radioColor:w,radioBgColor:C,calc:S}=e,E=`${t}-inner`,O=S(o).sub(S(4).mul(2)),$=S(1).mul(o).equal({unit:!0});return{[`${t}-wrapper`]:Object.assign(Object.assign({},jn(e)),{display:"inline-flex",alignItems:"baseline",marginInlineStart:0,marginInlineEnd:n,cursor:"pointer",[`&${t}-wrapper-rtl`]:{direction:"rtl"},"&-disabled":{cursor:"not-allowed",color:e.colorTextDisabled},"&::after":{display:"inline-block",width:0,overflow:"hidden",content:'"\\a0"'},"&-block":{flex:1,justifyContent:"center"},[`${t}-checked::after`]:{position:"absolute",insetBlockStart:0,insetInlineStart:0,width:"100%",height:"100%",border:`${de(p)} ${y} 
${r}`,borderRadius:"50%",visibility:"hidden",opacity:0,content:'""'},[t]:Object.assign(Object.assign({},jn(e)),{position:"relative",display:"inline-block",outline:"none",cursor:"pointer",alignSelf:"center",borderRadius:"50%"}),[`${t}-wrapper:hover &, - &:hover ${E}`]:{borderColor:r},[`${t}-input:focus-visible + ${E}`]:Object.assign({},qa(e)),[`${t}:hover::after, ${t}-wrapper:hover &::after`]:{visibility:"visible"},[`${t}-inner`]:{"&::after":{boxSizing:"border-box",position:"absolute",insetBlockStart:"50%",insetInlineStart:"50%",display:"block",width:$,height:$,marginBlockStart:S(1).mul(o).div(-2).equal({unit:!0}),marginInlineStart:S(1).mul(o).div(-2).equal({unit:!0}),backgroundColor:w,borderBlockStart:0,borderInlineStart:0,borderRadius:$,transform:"scale(0)",opacity:0,transition:`all ${i} ${s}`,content:'""'},boxSizing:"border-box",position:"relative",insetBlockStart:0,insetInlineStart:0,display:"block",width:$,height:$,backgroundColor:c,borderColor:u,borderStyle:"solid",borderWidth:p,borderRadius:"50%",transition:`all ${a}`},[`${t}-input`]:{position:"absolute",inset:0,zIndex:1,cursor:"pointer",opacity:0},[`${t}-checked`]:{[E]:{borderColor:r,backgroundColor:C,"&::after":{transform:`scale(${e.calc(e.dotSize).div(o).equal()})`,opacity:1,transition:`all ${i} ${s}`}}},[`${t}-disabled`]:{cursor:"not-allowed",[E]:{backgroundColor:v,borderColor:u,cursor:"not-allowed","&::after":{backgroundColor:b}},[`${t}-input`]:{cursor:"not-allowed"},[`${t}-disabled + span`]:{color:h,cursor:"not-allowed"},[`&${t}-checked`]:{[E]:{"&::after":{transform:`scale(${S(O).div(o).equal()})`}}}},[`span${t} + 
*`]:{paddingInlineStart:m,paddingInlineEnd:m}})}},G9=e=>{const{buttonColor:t,controlHeight:n,componentCls:r,lineWidth:o,lineType:i,colorBorder:a,motionDurationSlow:s,motionDurationMid:c,buttonPaddingInline:u,fontSize:p,buttonBg:v,fontSizeLG:h,controlHeightLG:m,controlHeightSM:b,paddingXS:y,borderRadius:w,borderRadiusSM:C,borderRadiusLG:S,buttonCheckedBg:E,buttonSolidCheckedColor:k,colorTextDisabled:O,colorBgContainerDisabled:$,buttonCheckedBgDisabled:T,buttonCheckedColorDisabled:M,colorPrimary:P,colorPrimaryHover:R,colorPrimaryActive:A,buttonSolidCheckedBg:V,buttonSolidCheckedHoverBg:z,buttonSolidCheckedActiveBg:B,calc:_}=e;return{[`${r}-button-wrapper`]:{position:"relative",display:"inline-block",height:n,margin:0,paddingInline:u,paddingBlock:0,color:t,fontSize:p,lineHeight:de(_(n).sub(_(o).mul(2)).equal()),background:v,border:`${de(o)} ${i} ${a}`,borderBlockStartWidth:_(o).add(.02).equal(),borderInlineStartWidth:0,borderInlineEndWidth:o,cursor:"pointer",transition:[`color ${c}`,`background ${c}`,`box-shadow ${c}`].join(","),a:{color:t},[`> ${r}-button`]:{position:"absolute",insetBlockStart:0,insetInlineStart:0,zIndex:-1,width:"100%",height:"100%"},"&:not(:first-child)":{"&::before":{position:"absolute",insetBlockStart:_(o).mul(-1).equal(),insetInlineStart:_(o).mul(-1).equal(),display:"block",boxSizing:"content-box",width:1,height:"100%",paddingBlock:o,paddingInline:0,backgroundColor:a,transition:`background-color ${s}`,content:'""'}},"&:first-child":{borderInlineStart:`${de(o)} ${i} ${a}`,borderStartStartRadius:w,borderEndStartRadius:w},"&:last-child":{borderStartEndRadius:w,borderEndEndRadius:w},"&:first-child:last-child":{borderRadius:w},[`${r}-group-large &`]:{height:m,fontSize:h,lineHeight:de(_(m).sub(_(o).mul(2)).equal()),"&:first-child":{borderStartStartRadius:S,borderEndStartRadius:S},"&:last-child":{borderStartEndRadius:S,borderEndEndRadius:S}},[`${r}-group-small 
&`]:{height:b,paddingInline:_(y).sub(o).equal(),paddingBlock:0,lineHeight:de(_(b).sub(_(o).mul(2)).equal()),"&:first-child":{borderStartStartRadius:C,borderEndStartRadius:C},"&:last-child":{borderStartEndRadius:C,borderEndEndRadius:C}},"&:hover":{position:"relative",color:P},"&:has(:focus-visible)":Object.assign({},qa(e)),[`${r}-inner, input[type='checkbox'], input[type='radio']`]:{width:0,height:0,opacity:0,pointerEvents:"none"},[`&-checked:not(${r}-button-wrapper-disabled)`]:{zIndex:1,color:P,background:E,borderColor:P,"&::before":{backgroundColor:P},"&:first-child":{borderColor:P},"&:hover":{color:R,borderColor:R,"&::before":{backgroundColor:R}},"&:active":{color:A,borderColor:A,"&::before":{backgroundColor:A}}},[`${r}-group-solid &-checked:not(${r}-button-wrapper-disabled)`]:{color:k,background:V,borderColor:V,"&:hover":{color:k,background:z,borderColor:z},"&:active":{color:k,background:B,borderColor:B}},"&-disabled":{color:O,backgroundColor:$,borderColor:a,cursor:"not-allowed","&:first-child, &:hover":{color:O,backgroundColor:$,borderColor:a}},[`&-disabled${r}-button-wrapper-checked`]:{color:M,backgroundColor:T,borderColor:a,boxShadow:"none"},"&-block":{flex:1,textAlign:"center"}}}},Y9=e=>{const{wireframe:t,padding:n,marginXS:r,lineWidth:o,fontSizeLG:i,colorText:a,colorBgContainer:s,colorTextDisabled:c,controlItemBgActiveDisabled:u,colorTextLightSolid:p,colorPrimary:v,colorPrimaryHover:h,colorPrimaryActive:m,colorWhite:b}=e,y=4,w=i,C=t?w-y*2:w-(y+o)*2;return{radioSize:w,dotSize:C,dotColorDisabled:c,buttonSolidCheckedColor:p,buttonSolidCheckedBg:v,buttonSolidCheckedHoverBg:h,buttonSolidCheckedActiveBg:m,buttonBg:s,buttonCheckedBg:s,buttonColor:a,buttonCheckedBgDisabled:u,buttonCheckedColorDisabled:c,buttonPaddingInline:n-o,wrapperMarginInlineEnd:r,radioColor:t?v:b,radioBgColor:t?s:v}},GP=In("Radio",e=>{const{controlOutline:t,controlOutlineWidth:n}=e,r=`0 0 0 ${de(n)} 
${t}`,i=vn(e,{radioFocusShadow:r,radioButtonFocusShadow:r});return[q9(i),X9(i),G9(i)]},Y9,{unitless:{radioSize:!0,dotSize:!0}});var Q9=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{var n,r;const o=d.useContext(KP),i=d.useContext(qP),{getPrefixCls:a,direction:s,radio:c}=d.useContext(ht),u=d.useRef(null),p=Wr(t,u),{isFormItemInput:v}=d.useContext(Vr),h=B=>{var _,H;(_=e.onChange)===null||_===void 0||_.call(e,B),(H=o==null?void 0:o.onChange)===null||H===void 0||H.call(o,B)},{prefixCls:m,className:b,rootClassName:y,children:w,style:C,title:S}=e,E=Q9(e,["prefixCls","className","rootClassName","children","style","title"]),k=a("radio",m),O=((o==null?void 0:o.optionType)||i)==="button",$=O?`${k}-button`:k,T=br(k),[M,P,R]=GP(k,T),A=Object.assign({},E),V=d.useContext(So);o&&(A.name=o.name,A.onChange=h,A.checked=e.value===o.value,A.disabled=(n=A.disabled)!==null&&n!==void 0?n:o.disabled),A.disabled=(r=A.disabled)!==null&&r!==void 0?r:V;const z=ie(`${$}-wrapper`,{[`${$}-wrapper-checked`]:A.checked,[`${$}-wrapper-disabled`]:A.disabled,[`${$}-wrapper-rtl`]:s==="rtl",[`${$}-wrapper-in-form-item`]:v,[`${$}-wrapper-block`]:!!(o!=null&&o.block)},c==null?void 0:c.className,b,y,P,R,T);return M(d.createElement(Lv,{component:"Radio",disabled:A.disabled},d.createElement("label",{className:z,style:Object.assign(Object.assign({},c==null?void 0:c.style),C),onMouseEnter:e.onMouseEnter,onMouseLeave:e.onMouseLeave,title:S},d.createElement(XP,Object.assign({},A,{className:ie(A.className,{[jv]:!O}),type:"radio",prefixCls:$,ref:p})),w!==void 
0?d.createElement("span",null,w):null)))},rv=d.forwardRef(Z9),J9=d.forwardRef((e,t)=>{const{getPrefixCls:n,direction:r}=d.useContext(ht),{prefixCls:o,className:i,rootClassName:a,options:s,buttonStyle:c="outline",disabled:u,children:p,size:v,style:h,id:m,optionType:b,name:y,defaultValue:w,value:C,block:S=!1,onChange:E,onMouseEnter:k,onMouseLeave:O,onFocus:$,onBlur:T}=e,[M,P]=Dn(w,{value:C}),R=d.useCallback(D=>{const W=M,G=D.target.value;"value"in e||P(G),G!==W&&(E==null||E(D))},[M,P,E]),A=n("radio",o),V=`${A}-group`,z=br(A),[B,_,H]=GP(A,z);let j=p;s&&s.length>0&&(j=s.map(D=>typeof D=="string"||typeof D=="number"?d.createElement(rv,{key:D.toString(),prefixCls:A,disabled:u,value:D,checked:M===D},D):d.createElement(rv,{key:`radio-group-value-options-${D.value}`,prefixCls:A,disabled:D.disabled||u,value:D.value,checked:M===D.value,title:D.title,style:D.style,id:D.id,required:D.required},D.label)));const L=Go(v),F=ie(V,`${V}-${c}`,{[`${V}-${L}`]:L,[`${V}-rtl`]:r==="rtl",[`${V}-block`]:S},i,a,_,H,z),U=d.useMemo(()=>({onChange:R,value:M,disabled:u,name:y,optionType:b,block:S}),[R,M,u,y,b,S]);return B(d.createElement("div",Object.assign({},Gr(e,{aria:!0,data:!0}),{className:F,style:h,onMouseEnter:k,onMouseLeave:O,onFocus:$,onBlur:T,id:m,ref:t}),d.createElement(W9,{value:U},j)))}),eV=d.memo(J9);var tV=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{const{getPrefixCls:n}=d.useContext(ht),{prefixCls:r}=e,o=tV(e,["prefixCls"]),i=n("radio",r);return d.createElement(U9,{value:"button"},d.createElement(rv,Object.assign({prefixCls:i},o,{type:"radio",ref:t})))},rV=d.forwardRef(nV),Td=rv;Td.Button=rV;Td.Group=eV;Td.__ANT_RADIO=!0;function Lw(e){return vn(e,{inputAffixPadding:e.paddingXXS})}const 
Bw=e=>{const{controlHeight:t,fontSize:n,lineHeight:r,lineWidth:o,controlHeightSM:i,controlHeightLG:a,fontSizeLG:s,lineHeightLG:c,paddingSM:u,controlPaddingHorizontalSM:p,controlPaddingHorizontal:v,colorFillAlter:h,colorPrimaryHover:m,colorPrimary:b,controlOutlineWidth:y,controlOutline:w,colorErrorOutline:C,colorWarningOutline:S,colorBgContainer:E}=e;return{paddingBlock:Math.max(Math.round((t-n*r)/2*10)/10-o,0),paddingBlockSM:Math.max(Math.round((i-n*r)/2*10)/10-o,0),paddingBlockLG:Math.ceil((a-s*c)/2*10)/10-o,paddingInline:u-o,paddingInlineSM:p-o,paddingInlineLG:v-o,addonBg:h,activeBorderColor:b,hoverBorderColor:m,activeShadow:`0 0 0 ${y}px ${w}`,errorActiveShadow:`0 0 0 ${y}px ${C}`,warningActiveShadow:`0 0 0 ${y}px ${S}`,hoverBg:E,activeBg:E,inputFontSize:n,inputFontSizeLG:s,inputFontSizeSM:n}},oV=e=>({borderColor:e.hoverBorderColor,backgroundColor:e.hoverBg}),th=e=>({color:e.colorTextDisabled,backgroundColor:e.colorBgContainerDisabled,borderColor:e.colorBorder,boxShadow:"none",cursor:"not-allowed",opacity:1,"input[disabled], textarea[disabled]":{cursor:"not-allowed"},"&:hover:not([disabled])":Object.assign({},oV(vn(e,{hoverBorderColor:e.colorBorder,hoverBg:e.colorBgContainerDisabled})))}),Aw=(e,t)=>({background:e.colorBgContainer,borderWidth:e.lineWidth,borderStyle:e.lineType,borderColor:t.borderColor,"&:hover":{borderColor:t.hoverBorderColor,backgroundColor:e.hoverBg},"&:focus, &:focus-within":{borderColor:t.activeBorderColor,boxShadow:t.activeShadow,outline:0,backgroundColor:e.activeBg}}),Sk=(e,t)=>({[`&${e.componentCls}-status-${t.status}:not(${e.componentCls}-disabled)`]:Object.assign(Object.assign({},Aw(e,t)),{[`${e.componentCls}-prefix, 
${e.componentCls}-suffix`]:{color:t.affixColor}}),[`&${e.componentCls}-status-${t.status}${e.componentCls}-disabled`]:{borderColor:t.borderColor}}),iV=(e,t)=>({"&-outlined":Object.assign(Object.assign(Object.assign(Object.assign(Object.assign({},Aw(e,{borderColor:e.colorBorder,hoverBorderColor:e.hoverBorderColor,activeBorderColor:e.activeBorderColor,activeShadow:e.activeShadow})),{[`&${e.componentCls}-disabled, &[disabled]`]:Object.assign({},th(e))}),Sk(e,{status:"error",borderColor:e.colorError,hoverBorderColor:e.colorErrorBorderHover,activeBorderColor:e.colorError,activeShadow:e.errorActiveShadow,affixColor:e.colorError})),Sk(e,{status:"warning",borderColor:e.colorWarning,hoverBorderColor:e.colorWarningBorderHover,activeBorderColor:e.colorWarning,activeShadow:e.warningActiveShadow,affixColor:e.colorWarning})),t)}),Ck=(e,t)=>({[`&${e.componentCls}-group-wrapper-status-${t.status}`]:{[`${e.componentCls}-group-addon`]:{borderColor:t.addonBorderColor,color:t.addonColor}}}),aV=e=>({"&-outlined":Object.assign(Object.assign(Object.assign({[`${e.componentCls}-group`]:{"&-addon":{background:e.addonBg,border:`${de(e.lineWidth)} ${e.lineType} ${e.colorBorder}`},"&-addon:first-child":{borderInlineEnd:0},"&-addon:last-child":{borderInlineStart:0}}},Ck(e,{status:"error",addonBorderColor:e.colorError,addonColor:e.colorErrorText})),Ck(e,{status:"warning",addonBorderColor:e.colorWarning,addonColor:e.colorWarningText})),{[`&${e.componentCls}-group-wrapper-disabled`]:{[`${e.componentCls}-group-addon`]:Object.assign({},th(e))}})}),sV=(e,t)=>{const{componentCls:n}=e;return{"&-borderless":Object.assign({background:"transparent",border:"none","&:focus, &:focus-within":{outline:"none"},[`&${n}-disabled, &[disabled]`]:{color:e.colorTextDisabled,cursor:"not-allowed"},[`&${n}-status-error`]:{"&, & input, & textarea":{color:e.colorError}},[`&${n}-status-warning`]:{"&, & input, & 
textarea":{color:e.colorWarning}}},t)}},YP=(e,t)=>({background:t.bg,borderWidth:e.lineWidth,borderStyle:e.lineType,borderColor:"transparent","input&, & input, textarea&, & textarea":{color:t==null?void 0:t.inputColor},"&:hover":{background:t.hoverBg},"&:focus, &:focus-within":{outline:0,borderColor:t.activeBorderColor,backgroundColor:e.activeBg}}),Ek=(e,t)=>({[`&${e.componentCls}-status-${t.status}:not(${e.componentCls}-disabled)`]:Object.assign(Object.assign({},YP(e,t)),{[`${e.componentCls}-prefix, ${e.componentCls}-suffix`]:{color:t.affixColor}})}),lV=(e,t)=>({"&-filled":Object.assign(Object.assign(Object.assign(Object.assign(Object.assign({},YP(e,{bg:e.colorFillTertiary,hoverBg:e.colorFillSecondary,activeBorderColor:e.activeBorderColor})),{[`&${e.componentCls}-disabled, &[disabled]`]:Object.assign({},th(e))}),Ek(e,{status:"error",bg:e.colorErrorBg,hoverBg:e.colorErrorBgHover,activeBorderColor:e.colorError,inputColor:e.colorErrorText,affixColor:e.colorError})),Ek(e,{status:"warning",bg:e.colorWarningBg,hoverBg:e.colorWarningBgHover,activeBorderColor:e.colorWarning,inputColor:e.colorWarningText,affixColor:e.colorWarning})),t)}),kk=(e,t)=>({[`&${e.componentCls}-group-wrapper-status-${t.status}`]:{[`${e.componentCls}-group-addon`]:{background:t.addonBg,color:t.addonColor}}}),cV=e=>({"&-filled":Object.assign(Object.assign(Object.assign({[`${e.componentCls}-group`]:{"&-addon":{background:e.colorFillTertiary},[`${e.componentCls}-filled:not(:focus):not(:focus-within)`]:{"&:not(:first-child)":{borderInlineStart:`${de(e.lineWidth)} ${e.lineType} ${e.colorSplit}`},"&:not(:last-child)":{borderInlineEnd:`${de(e.lineWidth)} ${e.lineType} 
${e.colorSplit}`}}}},kk(e,{status:"error",addonBg:e.colorErrorBg,addonColor:e.colorErrorText})),kk(e,{status:"warning",addonBg:e.colorWarningBg,addonColor:e.colorWarningText})),{[`&${e.componentCls}-group-wrapper-disabled`]:{[`${e.componentCls}-group`]:{"&-addon":{background:e.colorFillTertiary,color:e.colorTextDisabled},"&-addon:first-child":{borderInlineStart:`${de(e.lineWidth)} ${e.lineType} ${e.colorBorder}`,borderTop:`${de(e.lineWidth)} ${e.lineType} ${e.colorBorder}`,borderBottom:`${de(e.lineWidth)} ${e.lineType} ${e.colorBorder}`},"&-addon:last-child":{borderInlineEnd:`${de(e.lineWidth)} ${e.lineType} ${e.colorBorder}`,borderTop:`${de(e.lineWidth)} ${e.lineType} ${e.colorBorder}`,borderBottom:`${de(e.lineWidth)} ${e.lineType} ${e.colorBorder}`}}}})}),uV=e=>({"&::-moz-placeholder":{opacity:1},"&::placeholder":{color:e,userSelect:"none"},"&:placeholder-shown":{textOverflow:"ellipsis"}}),QP=e=>{const{paddingBlockLG:t,lineHeightLG:n,borderRadiusLG:r,paddingInlineLG:o}=e;return{padding:`${de(t)} ${de(o)}`,fontSize:e.inputFontSizeLG,lineHeight:n,borderRadius:r}},zw=e=>({padding:`${de(e.paddingBlockSM)} ${de(e.paddingInlineSM)}`,fontSize:e.inputFontSizeSM,borderRadius:e.borderRadiusSM}),Hw=e=>Object.assign(Object.assign({position:"relative",display:"inline-block",width:"100%",minWidth:0,padding:`${de(e.paddingBlock)} ${de(e.paddingInline)}`,color:e.colorText,fontSize:e.inputFontSize,lineHeight:e.lineHeight,borderRadius:e.borderRadius,transition:`all ${e.motionDurationMid}`},uV(e.colorTextPlaceholder)),{"textarea&":{maxWidth:"100%",height:"auto",minHeight:e.controlHeight,lineHeight:e.lineHeight,verticalAlign:"bottom",transition:`all ${e.motionDurationSlow}, height 0s`,resize:"vertical"},"&-lg":Object.assign({},QP(e)),"&-sm":Object.assign({},zw(e)),"&-rtl, 
&-textarea-rtl":{direction:"rtl"}}),dV=e=>{const{componentCls:t,antCls:n}=e;return{position:"relative",display:"table",width:"100%",borderCollapse:"separate",borderSpacing:0,"&[class*='col-']":{paddingInlineEnd:e.paddingXS,"&:last-child":{paddingInlineEnd:0}},[`&-lg ${t}, &-lg > ${t}-group-addon`]:Object.assign({},QP(e)),[`&-sm ${t}, &-sm > ${t}-group-addon`]:Object.assign({},zw(e)),[`&-lg ${n}-select-single ${n}-select-selector`]:{height:e.controlHeightLG},[`&-sm ${n}-select-single ${n}-select-selector`]:{height:e.controlHeightSM},[`> ${t}`]:{display:"table-cell","&:not(:first-child):not(:last-child)":{borderRadius:0}},[`${t}-group`]:{"&-addon, &-wrap":{display:"table-cell",width:1,whiteSpace:"nowrap",verticalAlign:"middle","&:not(:first-child):not(:last-child)":{borderRadius:0}},"&-wrap > *":{display:"block !important"},"&-addon":{position:"relative",padding:`0 ${de(e.paddingInline)}`,color:e.colorText,fontWeight:"normal",fontSize:e.inputFontSize,textAlign:"center",borderRadius:e.borderRadius,transition:`all ${e.motionDurationSlow}`,lineHeight:1,[`${n}-select`]:{margin:`${de(e.calc(e.paddingBlock).add(1).mul(-1).equal())} ${de(e.calc(e.paddingInline).mul(-1).equal())}`,[`&${n}-select-single:not(${n}-select-customize-input):not(${n}-pagination-size-changer)`]:{[`${n}-select-selector`]:{backgroundColor:"inherit",border:`${de(e.lineWidth)} ${e.lineType} transparent`,boxShadow:"none"}}},[`${n}-cascader-picker`]:{margin:`-9px ${de(e.calc(e.paddingInline).mul(-1).equal())}`,backgroundColor:"transparent",[`${n}-cascader-input`]:{textAlign:"start",border:0,boxShadow:"none"}}}},[t]:{width:"100%",marginBottom:0,textAlign:"inherit","&:focus":{zIndex:1,borderInlineEndWidth:1},"&:hover":{zIndex:1,borderInlineEndWidth:1,[`${t}-search-with-button &`]:{zIndex:0}}},[`> ${t}:first-child, ${t}-group-addon:first-child`]:{borderStartEndRadius:0,borderEndEndRadius:0,[`${n}-select ${n}-select-selector`]:{borderStartEndRadius:0,borderEndEndRadius:0}},[`> 
${t}-affix-wrapper`]:{[`&:not(:first-child) ${t}`]:{borderStartStartRadius:0,borderEndStartRadius:0},[`&:not(:last-child) ${t}`]:{borderStartEndRadius:0,borderEndEndRadius:0}},[`> ${t}:last-child, ${t}-group-addon:last-child`]:{borderStartStartRadius:0,borderEndStartRadius:0,[`${n}-select ${n}-select-selector`]:{borderStartStartRadius:0,borderEndStartRadius:0}},[`${t}-affix-wrapper`]:{"&:not(:last-child)":{borderStartEndRadius:0,borderEndEndRadius:0,[`${t}-search &`]:{borderStartStartRadius:e.borderRadius,borderEndStartRadius:e.borderRadius}},[`&:not(:first-child), ${t}-search &:not(:first-child)`]:{borderStartStartRadius:0,borderEndStartRadius:0}},[`&${t}-group-compact`]:Object.assign(Object.assign({display:"block"},Ps()),{[`${t}-group-addon, ${t}-group-wrap, > ${t}`]:{"&:not(:first-child):not(:last-child)":{borderInlineEndWidth:e.lineWidth,"&:hover, &:focus":{zIndex:1}}},"& > *":{display:"inline-flex",float:"none",verticalAlign:"top",borderRadius:0},[` - & > ${t}-affix-wrapper, - & > ${t}-number-affix-wrapper, - & > ${n}-picker-range - `]:{display:"inline-flex"},"& > *:not(:last-child)":{marginInlineEnd:e.calc(e.lineWidth).mul(-1).equal(),borderInlineEndWidth:e.lineWidth},[t]:{float:"none"},[`& > ${n}-select > ${n}-select-selector, - & > ${n}-select-auto-complete ${t}, - & > ${n}-cascader-picker ${t}, - & > ${t}-group-wrapper ${t}`]:{borderInlineEndWidth:e.lineWidth,borderRadius:0,"&:hover, &:focus":{zIndex:1}},[`& > ${n}-select-focused`]:{zIndex:1},[`& > ${n}-select > ${n}-select-arrow`]:{zIndex:1},[`& > *:first-child, - & > ${n}-select:first-child > ${n}-select-selector, - & > ${n}-select-auto-complete:first-child ${t}, - & > ${n}-cascader-picker:first-child ${t}`]:{borderStartStartRadius:e.borderRadius,borderEndStartRadius:e.borderRadius},[`& > *:last-child, - & > ${n}-select:last-child > ${n}-select-selector, - & > ${n}-cascader-picker:last-child ${t}, - & > ${n}-cascader-picker-focused:last-child 
${t}`]:{borderInlineEndWidth:e.lineWidth,borderStartEndRadius:e.borderRadius,borderEndEndRadius:e.borderRadius},[`& > ${n}-select-auto-complete ${t}`]:{verticalAlign:"top"},[`${t}-group-wrapper + ${t}-group-wrapper`]:{marginInlineStart:e.calc(e.lineWidth).mul(-1).equal(),[`${t}-affix-wrapper`]:{borderRadius:0}},[`${t}-group-wrapper:not(:last-child)`]:{[`&${t}-search > ${t}-group`]:{[`& > ${t}-group-addon > ${t}-search-button`]:{borderRadius:0},[`& > ${t}`]:{borderStartStartRadius:e.borderRadius,borderStartEndRadius:0,borderEndEndRadius:0,borderEndStartRadius:e.borderRadius}}}})}},fV=e=>{const{componentCls:t,controlHeightSM:n,lineWidth:r,calc:o}=e,a=o(n).sub(o(r).mul(2)).sub(16).div(2).equal();return{[t]:Object.assign(Object.assign(Object.assign(Object.assign(Object.assign(Object.assign({},jn(e)),Hw(e)),iV(e)),lV(e)),sV(e)),{'&[type="color"]':{height:e.controlHeight,[`&${t}-lg`]:{height:e.controlHeightLG},[`&${t}-sm`]:{height:n,paddingTop:a,paddingBottom:a}},'&[type="search"]::-webkit-search-cancel-button, &[type="search"]::-webkit-search-decoration':{"-webkit-appearance":"none"}})}},pV=e=>{const{componentCls:t}=e;return{[`${t}-clear-icon`]:{margin:0,color:e.colorTextQuaternary,fontSize:e.fontSizeIcon,verticalAlign:-1,cursor:"pointer",transition:`color ${e.motionDurationSlow}`,"&:hover":{color:e.colorTextTertiary},"&:active":{color:e.colorText},"&-hidden":{visibility:"hidden"},"&-has-suffix":{margin:`0 ${de(e.inputAffixPadding)}`}}}},vV=e=>{const{componentCls:t,inputAffixPadding:n,colorTextDescription:r,motionDurationSlow:o,colorIcon:i,colorIconHover:a,iconCls:s}=e,c=`${t}-affix-wrapper`,u=`${t}-affix-wrapper-disabled`;return{[c]:Object.assign(Object.assign(Object.assign(Object.assign({},Hw(e)),{display:"inline-flex",[`&:not(${t}-disabled):hover`]:{zIndex:1,[`${t}-search-with-button &`]:{zIndex:0}},"&-focused, &:focus":{zIndex:1},[`> input${t}`]:{padding:0},[`> input${t}, > 
textarea${t}`]:{fontSize:"inherit",border:"none",borderRadius:0,outline:"none",background:"transparent",color:"inherit","&::-ms-reveal":{display:"none"},"&:focus":{boxShadow:"none !important"}},"&::before":{display:"inline-block",width:0,visibility:"hidden",content:'"\\a0"'},[t]:{"&-prefix, &-suffix":{display:"flex",flex:"none",alignItems:"center","> *:not(:last-child)":{marginInlineEnd:e.paddingXS}},"&-show-count-suffix":{color:r},"&-show-count-has-suffix":{marginInlineEnd:e.paddingXXS},"&-prefix":{marginInlineEnd:n},"&-suffix":{marginInlineStart:n}}}),pV(e)),{[`${s}${t}-password-icon`]:{color:i,cursor:"pointer",transition:`all ${o}`,"&:hover":{color:a}}}),[u]:{[`${s}${t}-password-icon`]:{color:i,cursor:"not-allowed","&:hover":{color:i}}}}},hV=e=>{const{componentCls:t,borderRadiusLG:n,borderRadiusSM:r}=e;return{[`${t}-group`]:Object.assign(Object.assign(Object.assign({},jn(e)),dV(e)),{"&-rtl":{direction:"rtl"},"&-wrapper":Object.assign(Object.assign(Object.assign({display:"inline-block",width:"100%",textAlign:"start",verticalAlign:"top","&-rtl":{direction:"rtl"},"&-lg":{[`${t}-group-addon`]:{borderRadius:n,fontSize:e.inputFontSizeLG}},"&-sm":{[`${t}-group-addon`]:{borderRadius:r}}},aV(e)),cV(e)),{[`&:not(${t}-compact-first-item):not(${t}-compact-last-item)${t}-compact-item`]:{[`${t}, ${t}-group-addon`]:{borderRadius:0}},[`&:not(${t}-compact-last-item)${t}-compact-first-item`]:{[`${t}, ${t}-group-addon`]:{borderStartEndRadius:0,borderEndEndRadius:0}},[`&:not(${t}-compact-first-item)${t}-compact-last-item`]:{[`${t}, ${t}-group-addon`]:{borderStartStartRadius:0,borderEndStartRadius:0}},[`&:not(${t}-compact-last-item)${t}-compact-item`]:{[`${t}-affix-wrapper`]:{borderStartEndRadius:0,borderEndEndRadius:0}}})})}},gV=e=>{const{componentCls:t,antCls:n}=e,r=`${t}-search`;return{[r]:{[t]:{"&:hover, &:focus":{[`+ ${t}-group-addon 
${r}-button:not(${n}-btn-primary)`]:{borderInlineStartColor:e.colorPrimaryHover}}},[`${t}-affix-wrapper`]:{height:e.controlHeight,borderRadius:0},[`${t}-lg`]:{lineHeight:e.calc(e.lineHeightLG).sub(2e-4).equal()},[`> ${t}-group`]:{[`> ${t}-group-addon:last-child`]:{insetInlineStart:-1,padding:0,border:0,[`${r}-button`]:{marginInlineEnd:-1,paddingTop:0,paddingBottom:0,borderStartStartRadius:0,borderEndStartRadius:0,boxShadow:"none"},[`${r}-button:not(${n}-btn-primary)`]:{color:e.colorTextDescription,"&:hover":{color:e.colorPrimaryHover},"&:active":{color:e.colorPrimaryActive},[`&${n}-btn-loading::before`]:{insetInlineStart:0,insetInlineEnd:0,insetBlockStart:0,insetBlockEnd:0}}}},[`${r}-button`]:{height:e.controlHeight,"&:hover, &:focus":{zIndex:1}},"&-large":{[`${t}-affix-wrapper, ${r}-button`]:{height:e.controlHeightLG}},"&-small":{[`${t}-affix-wrapper, ${r}-button`]:{height:e.controlHeightSM}},"&-rtl":{direction:"rtl"},[`&${t}-compact-item`]:{[`&:not(${t}-compact-last-item)`]:{[`${t}-group-addon`]:{[`${t}-search-button`]:{marginInlineEnd:e.calc(e.lineWidth).mul(-1).equal(),borderRadius:0}}},[`&:not(${t}-compact-first-item)`]:{[`${t},${t}-affix-wrapper`]:{borderRadius:0}},[`> ${t}-group-addon ${t}-search-button, - > ${t}, - ${t}-affix-wrapper`]:{"&:hover, &:focus, &:active":{zIndex:2}},[`> ${t}-affix-wrapper-focused`]:{zIndex:2}}}}},mV=e=>{const{componentCls:t,paddingLG:n}=e,r=`${t}-textarea`;return{[r]:{position:"relative","&-show-count":{[`> ${t}`]:{height:"100%"},[`${t}-data-count`]:{position:"absolute",bottom:e.calc(e.fontSize).mul(e.lineHeight).mul(-1).equal(),insetInlineEnd:0,color:e.colorTextDescription,whiteSpace:"nowrap",pointerEvents:"none"}},[` - &-allow-clear > ${t}, - &-affix-wrapper${r}-has-feedback ${t} - `]:{paddingInlineEnd:n},[`&-affix-wrapper${t}-affix-wrapper`]:{padding:0,[`> textarea${t}`]:{fontSize:"inherit",border:"none",outline:"none",background:"transparent","&:focus":{boxShadow:"none !important"}},[`${t}-suffix`]:{margin:0,"> 
*:not(:last-child)":{marginInline:0},[`${t}-clear-icon`]:{position:"absolute",insetInlineEnd:e.paddingInline,insetBlockStart:e.paddingXS},[`${r}-suffix`]:{position:"absolute",top:0,insetInlineEnd:e.paddingInline,bottom:0,zIndex:1,display:"inline-flex",alignItems:"center",margin:"auto",pointerEvents:"none"}}},[`&-affix-wrapper${t}-affix-wrapper-sm`]:{[`${t}-suffix`]:{[`${t}-clear-icon`]:{insetInlineEnd:e.paddingInlineSM}}}}}},bV=e=>{const{componentCls:t}=e;return{[`${t}-out-of-range`]:{[`&, & input, & textarea, ${t}-show-count-suffix, ${t}-data-count`]:{color:e.colorError}}}},Fw=In("Input",e=>{const t=vn(e,Lw(e));return[fV(t),mV(t),vV(t),hV(t),gV(t),bV(t),_v(t)]},Bw,{resetFont:!1});function yV(e,t,n){var r=n||{},o=r.noTrailing,i=o===void 0?!1:o,a=r.noLeading,s=a===void 0?!1:a,c=r.debounceMode,u=c===void 0?void 0:c,p,v=!1,h=0;function m(){p&&clearTimeout(p)}function b(w){var C=w||{},S=C.upcomingOnly,E=S===void 0?!1:S;m(),v=!E}function y(){for(var w=arguments.length,C=new Array(w),S=0;Se?s?(h=Date.now(),i||(p=setTimeout(u?$:O,e))):O():i!==!0&&(p=setTimeout(u?$:O,u===void 0?e-k:e))}return y.cancel=b,y}function wV(e,t,n){var r={},o=r.atBegin,i=o===void 0?!1:o;return yV(e,t,{debounceMode:i!==!1})}var vc=d.createContext({}),Nl="__rc_cascader_search_mark__",xV=function(t,n,r){var o=r.label,i=o===void 0?"":o;return n.some(function(a){return String(a[i]).toLowerCase().includes(t.toLowerCase())})},SV=function(t,n,r,o){return n.map(function(i){return i[o.label]}).join(" / ")},CV=function(t,n,r,o,i,a){var s=i.filter,c=s===void 0?xV:s,u=i.render,p=u===void 0?SV:u,v=i.limit,h=v===void 0?50:v,m=i.sort;return d.useMemo(function(){var b=[];if(!t)return[];function y(w,C){var S=arguments.length>2&&arguments[2]!==void 0?arguments[2]:!1;w.forEach(function(E){if(!(!m&&h!==!1&&h>0&&b.length>=h)){var k=[].concat(Se(C),[E]),O=E[r.children],$=S||E.disabled;if((!O||O.length===0||a)&&c(t,k,{label:r.label})){var 
T;b.push(Z(Z({},E),{},(T={disabled:$},K(T,r.label,p(t,k,o,r)),K(T,Nl,k),K(T,r.children,void 0),T)))}O&&y(E[r.children],k,$)}})}return y(n,[]),m&&b.sort(function(w,C){return m(w[Nl],C[Nl],t,r)}),h!==!1&&h>0?b.slice(0,h):b},[t,n,r,o,p,a,c,m,h])},_w="__RC_CASCADER_SPLIT__",ZP="SHOW_PARENT",JP="SHOW_CHILD";function fi(e){return e.join(_w)}function Zl(e){return e.map(fi)}function EV(e){return e.split(_w)}function eM(e){var t=e||{},n=t.label,r=t.value,o=t.children,i=r||"value";return{label:n||"label",value:i,key:i,children:o||"children"}}function pu(e,t){var n,r;return(n=e.isLeaf)!==null&&n!==void 0?n:!((r=e[t.children])!==null&&r!==void 0&&r.length)}function kV(e){var t=e.parentElement;if(t){var n=e.offsetTop-t.offsetTop;n-t.scrollTop<0?t.scrollTo({top:n}):n+e.offsetHeight-t.scrollTop>t.offsetHeight&&t.scrollTo({top:n+e.offsetHeight-t.offsetHeight})}}function tM(e,t){return e.map(function(n){var r;return(r=n[Nl])===null||r===void 0?void 0:r.map(function(o){return o[t.value]})})}function OV(e){return Array.isArray(e)&&Array.isArray(e[0])}function ov(e){return e?OV(e)?e:(e.length===0?[]:[e]).map(function(t){return Array.isArray(t)?t:[t]}):[]}function nM(e,t,n){var r=new Set(e),o=t();return e.filter(function(i){var a=o[i],s=a?a.parent:null,c=a?a.children:null;return a&&a.node.disabled?!0:n===JP?!(c&&c.some(function(u){return u.key&&r.has(u.key)})):!(s&&!s.node.disabled&&r.has(s.key))})}function Jl(e,t,n){for(var r=arguments.length>3&&arguments[3]!==void 0?arguments[3]:!1,o=t,i=[],a=function(){var u,p,v,h=e[s],m=(u=o)===null||u===void 0?void 0:u.findIndex(function(y){var w=y[n.value];return r?String(w)===String(h):w===h}),b=m!==-1?(p=o)===null||p===void 0?void 0:p[m]:null;i.push({value:(v=b==null?void 0:b[n.value])!==null&&v!==void 0?v:h,index:m,option:b}),o=b==null?void 0:b[n.children]},s=0;s1&&arguments[1]!==void 0?arguments[1]:null;return p.map(function(h,m){for(var b=oM(v?v.pos:"0",m),y=Pd(h[i],b),w,C=0;C1&&arguments[1]!==void 
0?arguments[1]:{},n=t.initWrapper,r=t.processEntity,o=t.onProcessFinished,i=t.externalGetKey,a=t.childrenPropName,s=t.fieldNames,c=arguments.length>2?arguments[2]:void 0,u=i||c,p={},v={},h={posEntities:p,keyEntities:v};return n&&(h=n(h)||h),PV(e,function(m){var b=m.node,y=m.index,w=m.pos,C=m.key,S=m.parentPos,E=m.level,k=m.nodes,O={node:b,nodes:k,index:y,key:C,pos:w,level:E},$=Pd(C,w);p[w]=O,v[$]=O,O.parent=p[S],O.parent&&(O.parent.children=O.parent.children||[],O.parent.children.push(O)),r&&r(O,h)},{externalGetKey:u,childrenPropName:a,fieldNames:s}),o&&o(h),h}function ku(e,t){var n=t.expandedKeys,r=t.selectedKeys,o=t.loadedKeys,i=t.loadingKeys,a=t.checkedKeys,s=t.halfCheckedKeys,c=t.dragOverNodeKey,u=t.dropPosition,p=t.keyEntities,v=io(p,e),h={eventKey:e,expanded:n.indexOf(e)!==-1,selected:r.indexOf(e)!==-1,loaded:o.indexOf(e)!==-1,loading:i.indexOf(e)!==-1,checked:a.indexOf(e)!==-1,halfChecked:s.indexOf(e)!==-1,pos:String(v?v.pos:""),dragOver:c===e&&u===0,dragOverGapTop:c===e&&u===-1,dragOverGapBottom:c===e&&u===1};return h}function cr(e){var t=e.data,n=e.expanded,r=e.selected,o=e.checked,i=e.loaded,a=e.loading,s=e.halfChecked,c=e.dragOver,u=e.dragOverGapTop,p=e.dragOverGapBottom,v=e.pos,h=e.active,m=e.eventKey,b=Z(Z({},t),{},{expanded:n,selected:r,checked:o,loaded:i,loading:a,halfChecked:s,dragOver:c,dragOverGapTop:u,dragOverGapBottom:p,pos:v,active:h,key:m});return"props"in b||Object.defineProperty(b,"props",{get:function(){return Fn(!1,"Second param return from event is node data instead of TreeNode instance. 
Please read value directly instead of reading from `props`."),e}}),b}const MV=function(e,t){var n=d.useRef({options:[],info:{keyEntities:{},pathKeyEntities:{}}}),r=d.useCallback(function(){return n.current.options!==e&&(n.current.options=e,n.current.info=nh(e,{fieldNames:t,initWrapper:function(i){return Z(Z({},i),{},{pathKeyEntities:{}})},processEntity:function(i,a){var s=i.nodes.map(function(c){return c[t.value]}).join(_w);a.pathKeyEntities[s]=i,i.key=s}})),n.current.info.pathKeyEntities},[t,e]);return r};function aM(e,t){var n=d.useMemo(function(){return t||[]},[t]),r=MV(n,e),o=d.useCallback(function(i){var a=r();return i.map(function(s){var c=a[s].nodes;return c.map(function(u){return u[e.value]})})},[r,e]);return[n,r,o]}function NV(e){return d.useMemo(function(){if(!e)return[!1,{}];var t={matchInputWidth:!0,limit:50};return e&&st(e)==="object"&&(t=Z(Z({},t),e)),t.limit<=0&&(t.limit=!1),[!0,t]},[e])}function sM(e,t){var n=new Set;return e.forEach(function(r){t.has(r)||n.add(r)}),n}function RV(e){var t=e||{},n=t.disabled,r=t.disableCheckbox,o=t.checkable;return!!(n||r)||o===!1}function DV(e,t,n,r){for(var o=new Set(e),i=new Set,a=0;a<=n;a+=1){var s=t.get(a)||new Set;s.forEach(function(v){var h=v.key,m=v.node,b=v.children,y=b===void 0?[]:b;o.has(h)&&!r(m)&&y.filter(function(w){return!r(w.node)}).forEach(function(w){o.add(w.key)})})}for(var c=new Set,u=n;u>=0;u-=1){var p=t.get(u)||new Set;p.forEach(function(v){var h=v.parent,m=v.node;if(!(r(m)||!v.parent||c.has(v.parent.key))){if(r(v.parent.node)){c.add(h.key);return}var b=!0,y=!1;(h.children||[]).filter(function(w){return!r(w.node)}).forEach(function(w){var C=w.key,S=o.has(C);b&&!S&&(b=!1),!y&&(S||i.has(C))&&(y=!0)}),b&&o.add(h.key),y&&i.add(h.key),c.add(h.key)}})}return{checkedKeys:Array.from(o),halfCheckedKeys:Array.from(sM(i,o))}}function jV(e,t,n,r,o){for(var i=new Set(e),a=new Set(t),s=0;s<=r;s+=1){var c=n.get(s)||new Set;c.forEach(function(h){var m=h.key,b=h.node,y=h.children,w=y===void 
0?[]:y;!i.has(m)&&!a.has(m)&&!o(b)&&w.filter(function(C){return!o(C.node)}).forEach(function(C){i.delete(C.key)})})}a=new Set;for(var u=new Set,p=r;p>=0;p-=1){var v=n.get(p)||new Set;v.forEach(function(h){var m=h.parent,b=h.node;if(!(o(b)||!h.parent||u.has(h.parent.key))){if(o(h.parent.node)){u.add(m.key);return}var y=!0,w=!1;(m.children||[]).filter(function(C){return!o(C.node)}).forEach(function(C){var S=C.key,E=i.has(S);y&&!E&&(y=!1),!w&&(E||a.has(S))&&(w=!0)}),y||i.delete(m.key),w&&a.add(m.key),u.add(m.key)}})}return{checkedKeys:Array.from(i),halfCheckedKeys:Array.from(sM(a,i))}}function ta(e,t,n,r){var o=[],i;r?i=r:i=RV;var a=new Set(e.filter(function(p){var v=!!io(n,p);return v||o.push(p),v})),s=new Map,c=0;Object.keys(n).forEach(function(p){var v=n[p],h=v.level,m=s.get(h);m||(m=new Set,s.set(h,m)),m.add(v),c=Math.max(c,h)}),Fn(!o.length,"Tree missing follow keys: ".concat(o.slice(0,100).map(function(p){return"'".concat(p,"'")}).join(", ")));var u;return t===!0?u=DV(a,s,c,i):u=jV(a,t.halfCheckedKeys,s,c,i),u}function lM(e,t,n,r,o,i,a,s){return function(c){if(!e)t(c);else{var u=fi(c),p=Zl(n),v=Zl(r),h=p.includes(u),m=o.some(function($){return fi($)===u}),b=n,y=o;if(m&&!h)y=o.filter(function($){return fi($)!==u});else{var w=h?p.filter(function($){return $!==u}):[].concat(Se(p),[u]),C=i(),S;if(h){var E=ta(w,{checked:!1,halfCheckedKeys:v},C);S=E.checkedKeys}else{var k=ta(w,!0,C);S=k.checkedKeys}var O=nM(S,i,s);b=a(O)}t([].concat(Se(y),Se(b)))}}}function cM(e,t,n,r,o){return d.useMemo(function(){var i=o(t),a=ve(i,2),s=a[0],c=a[1];if(!e||!t.length)return[s,[],c];var u=Zl(s),p=n(),v=ta(u,!0,p),h=v.checkedKeys,m=v.halfCheckedKeys;return[r(h),r(m),c]},[e,t,n,r,o])}var LV=d.memo(function(e){var t=e.children;return t},function(e,t){return!t.open});function BV(e){var t,n=e.prefixCls,r=e.checked,o=e.halfChecked,i=e.disabled,a=e.onClick,s=e.disableCheckbox,c=d.useContext(vc),u=c.checkable,p=typeof u!="boolean"?u:null;return 
d.createElement("span",{className:ie("".concat(n),(t={},K(t,"".concat(n,"-checked"),r),K(t,"".concat(n,"-indeterminate"),!r&&o),K(t,"".concat(n,"-disabled"),i||s),t)),onClick:a},p)}var uM="__cascader_fix_label__";function AV(e){var t=e.prefixCls,n=e.multiple,r=e.options,o=e.activeValue,i=e.prevValuePath,a=e.onToggleOpen,s=e.onSelect,c=e.onActive,u=e.checkedSet,p=e.halfCheckedSet,v=e.loadingKeys,h=e.isSelectable,m="".concat(t,"-menu"),b="".concat(t,"-menu-item"),y=d.useContext(vc),w=y.fieldNames,C=y.changeOnSelect,S=y.expandTrigger,E=y.expandIcon,k=y.loadingIcon,O=y.dropdownMenuColumnStyle,$=y.optionRender,T=S==="hover",M=d.useMemo(function(){return r.map(function(P){var R,A=P.disabled,V=P.disableCheckbox,z=P[Nl],B=(R=P[uM])!==null&&R!==void 0?R:P[w.label],_=P[w.value],H=pu(P,w),j=z?z.map(function(W){return W[w.value]}):[].concat(Se(i),[_]),L=fi(j),F=v.includes(L),U=u.has(L),D=p.has(L);return{disabled:A,label:B,value:_,isLeaf:H,isLoading:F,checked:U,halfChecked:D,option:P,disableCheckbox:V,fullPath:j,fullPathKey:L}})},[r,u,w,p,v,i]);return d.createElement("ul",{className:m,role:"menu"},M.map(function(P){var R,A=P.disabled,V=P.label,z=P.value,B=P.isLeaf,_=P.isLoading,H=P.checked,j=P.halfChecked,L=P.option,F=P.fullPath,U=P.fullPathKey,D=P.disableCheckbox,W=function(){if(!A){var Y=Se(F);T&&B&&Y.pop(),c(Y)}},G=function(){h(L)&&s(F,B)},q;return typeof L.title=="string"?q=L.title:typeof 
V=="string"&&(q=V),d.createElement("li",{key:U,className:ie(b,(R={},K(R,"".concat(b,"-expand"),!B),K(R,"".concat(b,"-active"),o===z||o===U),K(R,"".concat(b,"-disabled"),A),K(R,"".concat(b,"-loading"),_),R)),style:O,role:"menuitemcheckbox",title:q,"aria-checked":H,"data-path-key":U,onClick:function(){W(),!D&&(!n||B)&&G()},onDoubleClick:function(){C&&a(!1)},onMouseEnter:function(){T&&W()},onMouseDown:function(Y){Y.preventDefault()}},n&&d.createElement(BV,{prefixCls:"".concat(t,"-checkbox"),checked:H,halfChecked:j,disabled:A||D,disableCheckbox:D,onClick:function(Y){D||(Y.stopPropagation(),G())}}),d.createElement("div",{className:"".concat(b,"-content")},$?$(L):V),!_&&E&&!B&&d.createElement("div",{className:"".concat(b,"-expand-icon")},E),_&&k&&d.createElement("div",{className:"".concat(b,"-loading-icon")},k))}))}var zV=function(t,n){var r=d.useContext(vc),o=r.values,i=o[0],a=d.useState([]),s=ve(a,2),c=s[0],u=s[1];return d.useEffect(function(){t||u(i||[])},[n,i]),[c,u]};const HV=function(e,t,n,r,o,i,a){var s=a.direction,c=a.searchValue,u=a.toggleOpen,p=a.open,v=s==="rtl",h=d.useMemo(function(){for(var $=-1,T=t,M=[],P=[],R=r.length,A=tM(t,n),V=function(j){var L=T.findIndex(function(F,U){return(A[U]?fi(A[U]):F[n.value])===r[j]});if(L===-1)return 1;$=L,M.push($),P.push(r[j]),T=T[$][n.children]},z=0;z1){var T=b.slice(0,-1);S(T)}else u(!1)},O=function(){var T,M=((T=w[y])===null||T===void 0?void 0:T[n.children])||[],P=M.find(function(A){return!A.disabled});if(P){var R=[].concat(Se(b),[P[n.value]]);S(R)}};d.useImperativeHandle(e,function(){return{onKeyDown:function(T){var M=T.which;switch(M){case De.UP:case De.DOWN:var P=0;M===De.UP?P=-1:M===De.DOWN&&(P=1),P!==0&&E(P);break;case De.LEFT:if(c)break;v?O():k();break;case De.RIGHT:if(c)break;v?k():O();break;case De.BACKSPACE:c||k();break;case De.ENTER:if(b.length){var R=w[y],A=(R==null?void 0:R[Nl])||[];A.length?i(A.map(function(V){return V[n.value]}),A[A.length-1]):i(b,w[y])}break;case 
De.ESC:u(!1),p&&T.stopPropagation()}},onKeyUp:function(){}}})};var dM=d.forwardRef(function(e,t){var n,r,o,i=e.prefixCls,a=e.multiple,s=e.searchValue,c=e.toggleOpen,u=e.notFoundContent,p=e.direction,v=e.open,h=d.useRef(null),m=p==="rtl",b=d.useContext(vc),y=b.options,w=b.values,C=b.halfValues,S=b.fieldNames,E=b.changeOnSelect,k=b.onSelect,O=b.searchOptions,$=b.dropdownPrefixCls,T=b.loadData,M=b.expandTrigger,P=$||i,R=d.useState([]),A=ve(R,2),V=A[0],z=A[1],B=function(ee){if(!(!T||s)){var re=Jl(ee,y,S),le=re.map(function(ge){var Re=ge.option;return Re}),pe=le[le.length-1];if(pe&&!pu(pe,S)){var Oe=fi(ee);z(function(ge){return[].concat(Se(ge),[Oe])}),T(le)}}};d.useEffect(function(){V.length&&V.forEach(function(ae){var ee=EV(ae),re=Jl(ee,y,S,!0).map(function(pe){var Oe=pe.option;return Oe}),le=re[re.length-1];(!le||le[S.children]||pu(le,S))&&z(function(pe){return pe.filter(function(Oe){return Oe!==ae})})})},[y,V,S]);var _=d.useMemo(function(){return new Set(Zl(w))},[w]),H=d.useMemo(function(){return new Set(Zl(C))},[C]),j=zV(a,v),L=ve(j,2),F=L[0],U=L[1],D=function(ee){U(ee),B(ee)},W=function(ee){var re=ee.disabled,le=pu(ee,S);return!re&&(le||E||a)},G=function(ee,re){var le=arguments.length>2&&arguments[2]!==void 0?arguments[2]:!1;k(ee),!a&&(re||E&&(M==="hover"||le))&&c(!1)},q=d.useMemo(function(){return s?O:y},[s,O,y]),J=d.useMemo(function(){for(var ae=[{options:q}],ee=q,re=tM(ee,S),le=function(){var ge=F[pe],Re=ee.find(function(Te,Ae){return(re[Ae]?fi(re[Ae]):Te[S.value])===ge}),ye=Re==null?void 0:Re[S.children];if(!(ye!=null&&ye.length))return 1;ee=ye,ae.push({options:ye})},pe=0;pe":C,E=n.loadingIcon,k=n.direction,O=n.notFoundContent,$=O===void 0?"Not Found":O,T=!!c,M=Dn(u,{value:p,postState:ov}),P=ve(M,2),R=P[0],A=P[1],V=d.useMemo(function(){return eM(v)},[JSON.stringify(v)]),z=aM(V,s),B=ve(z,3),_=B[0],H=B[1],j=B[2],L=rM(_,V),F=cM(T,R,H,j,L),U=ve(F,3),D=U[0],W=U[1],G=U[2],q=gn(function(se){if(A(se),m){var ne=ov(se),ae=ne.map(function(le){return 
Jl(le,_,V).map(function(pe){return pe.option})}),ee=T?ne:ne[0],re=T?ae:ae[0];m(ee,re)}}),J=lM(T,q,D,W,G,H,j,b),Y=gn(function(se){J(se)}),Q=d.useMemo(function(){return{options:_,fieldNames:V,values:D,halfValues:W,changeOnSelect:h,onSelect:Y,checkable:c,searchOptions:[],dropdownPrefixCls:void 0,loadData:y,expandTrigger:w,expandIcon:S,loadingIcon:E,dropdownMenuColumnStyle:void 0}},[_,V,D,W,h,Y,c,y,w,S,E]),te="".concat(o,"-panel"),ce=!_.length;return d.createElement(vc.Provider,{value:Q},d.createElement("div",{className:ie(te,(t={},K(t,"".concat(te,"-rtl"),k==="rtl"),K(t,"".concat(te,"-empty"),ce),t),a),style:i},ce?$:d.createElement(dM,{prefixCls:o,searchValue:"",multiple:T,toggleOpen:_V,open:!0,direction:k})))}var VV=["id","prefixCls","fieldNames","defaultValue","value","changeOnSelect","onChange","displayRender","checkable","autoClearSearchValue","searchValue","onSearch","showSearch","expandTrigger","options","dropdownPrefixCls","loadData","popupVisible","open","popupClassName","dropdownClassName","dropdownMenuColumnStyle","dropdownStyle","popupPlacement","placement","onDropdownVisibleChange","onPopupVisibleChange","expandIcon","loadingIcon","children","dropdownMatchSelectWidth","showCheckedStrategy","optionRender"],Md=d.forwardRef(function(e,t){var n=e.id,r=e.prefixCls,o=r===void 0?"rc-cascader":r,i=e.fieldNames,a=e.defaultValue,s=e.value,c=e.changeOnSelect,u=e.onChange,p=e.displayRender,v=e.checkable,h=e.autoClearSearchValue,m=h===void 0?!0:h,b=e.searchValue,y=e.onSearch,w=e.showSearch,C=e.expandTrigger,S=e.options,E=e.dropdownPrefixCls,k=e.loadData,O=e.popupVisible,$=e.open,T=e.popupClassName,M=e.dropdownClassName,P=e.dropdownMenuColumnStyle,R=e.dropdownStyle,A=e.popupPlacement,V=e.placement,z=e.onDropdownVisibleChange,B=e.onPopupVisibleChange,_=e.expandIcon,H=_===void 0?">":_,j=e.loadingIcon,L=e.children,F=e.dropdownMatchSelectWidth,U=F===void 0?!1:F,D=e.showCheckedStrategy,W=D===void 
0?ZP:D,G=e.optionRender,q=Mt(e,VV),J=aP(n),Y=!!v,Q=Dn(a,{value:s,postState:ov}),te=ve(Q,2),ce=te[0],se=te[1],ne=d.useMemo(function(){return eM(i)},[JSON.stringify(i)]),ae=aM(ne,S),ee=ve(ae,3),re=ee[0],le=ee[1],pe=ee[2],Oe=Dn("",{value:b,postState:function(Je){return Je||""}}),ge=ve(Oe,2),Re=ge[0],ye=ge[1],Te=function(Je,He){ye(Je),He.source!=="blur"&&y&&y(Je)},Ae=NV(w),me=ve(Ae,2),Ie=me[0],Le=me[1],Be=CV(Re,re,ne,E||o,Le,c||Y),et=rM(re,ne),rt=cM(Y,ce,le,pe,et),Ze=ve(rt,3),Ve=Ze[0],Ye=Ze[1],Ge=Ze[2],Fe=d.useMemo(function(){var ft=Zl(Ve),Je=nM(ft,le,W);return[].concat(Se(Ge),Se(pe(Je)))},[Ve,le,pe,Ge,W]),we=$V(Fe,re,ne,Y,p),ze=gn(function(ft){if(se(ft),u){var Je=ov(ft),He=Je.map(function(wt){return Jl(wt,re,ne).map(function(_e){return _e.option})}),We=Y?Je:Je[0],Et=Y?He:He[0];u(We,Et)}}),Me=lM(Y,ze,Ve,Ye,Ge,le,pe,W),Pe=gn(function(ft){(!Y||m)&&ye(""),Me(ft)}),Ke=function(Je,He){if(He.type==="clear"){ze([]);return}var We=He.values[0],Et=We.valueCells;Pe(Et)},St=$!==void 0?$:O,Ft=M||T,Lt=V||A,Ct=function(Je){z==null||z(Je),B==null||B(Je)},Xt=d.useMemo(function(){return{options:re,fieldNames:ne,values:Ve,halfValues:Ye,changeOnSelect:c,onSelect:Pe,checkable:v,searchOptions:Be,dropdownPrefixCls:E,loadData:k,expandTrigger:C,expandIcon:H,loadingIcon:j,dropdownMenuColumnStyle:P,optionRender:G}},[re,ne,Ve,Ye,c,Pe,v,Be,E,k,C,H,j,P,G]),Pt=!(Re?Be:re).length,Gt=Re&&Le.matchInputWidth||Pt?{}:{minWidth:"auto"};return d.createElement(vc.Provider,{value:Xt},d.createElement(rP,$e({},q,{ref:t,id:J,prefixCls:o,autoClearSearchValue:m,dropdownMatchSelectWidth:U,dropdownStyle:Z(Z({},Gt),R),displayValues:we,onDisplayValuesChange:Ke,mode:Y?"multiple":void 0,searchValue:Re,onSearch:Te,showSearch:Ie,OptionList:FV,emptyOptions:Pt,open:St,dropdownClassName:Ft,placement:Lt,onDropdownVisibleChange:Ct,getRawInputElement:function(){return L}})))});Md.SHOW_PARENT=ZP;Md.SHOW_CHILD=JP;Md.Panel=fM;function 
pM(e,t){const{getPrefixCls:n,direction:r,renderEmpty:o}=d.useContext(ht),i=t||r,a=n("select",e),s=n("cascader",e);return[a,s,i,o]}function vM(e,t){return d.useMemo(()=>t?d.createElement("span",{className:`${e}-checkbox-inner`}):!1,[t])}const hM=(e,t,n)=>{let r=n;n||(r=t?d.createElement(Db,null):d.createElement(Yp,null));const o=d.createElement("span",{className:`${e}-menu-item-loading-icon`},d.createElement(Xa,{spin:!0}));return d.useMemo(()=>[r,o],[r])},WV=e=>{const{checkboxCls:t}=e,n=`${t}-wrapper`;return[{[`${t}-group`]:Object.assign(Object.assign({},jn(e)),{display:"inline-flex",flexWrap:"wrap",columnGap:e.marginXS,[`> ${e.antCls}-row`]:{flex:1}}),[n]:Object.assign(Object.assign({},jn(e)),{display:"inline-flex",alignItems:"baseline",cursor:"pointer","&:after":{display:"inline-block",width:0,overflow:"hidden",content:"'\\a0'"},[`& + ${n}`]:{marginInlineStart:0},[`&${n}-in-form-item`]:{'input[type="checkbox"]':{width:14,height:14}}}),[t]:Object.assign(Object.assign({},jn(e)),{position:"relative",whiteSpace:"nowrap",lineHeight:1,cursor:"pointer",borderRadius:e.borderRadiusSM,alignSelf:"center",[`${t}-input`]:{position:"absolute",inset:0,zIndex:1,cursor:"pointer",opacity:0,margin:0,[`&:focus-visible + ${t}-inner`]:Object.assign({},qa(e))},[`${t}-inner`]:{boxSizing:"border-box",display:"block",width:e.checkboxSize,height:e.checkboxSize,direction:"ltr",backgroundColor:e.colorBgContainer,border:`${de(e.lineWidth)} ${e.lineType} ${e.colorBorder}`,borderRadius:e.borderRadiusSM,borderCollapse:"separate",transition:`all ${e.motionDurationSlow}`,"&:after":{boxSizing:"border-box",position:"absolute",top:"50%",insetInlineStart:"25%",display:"table",width:e.calc(e.checkboxSize).div(14).mul(5).equal(),height:e.calc(e.checkboxSize).div(14).mul(8).equal(),border:`${de(e.lineWidthBold)} solid ${e.colorWhite}`,borderTop:0,borderInlineStart:0,transform:"rotate(45deg) scale(0) translate(-50%,-50%)",opacity:0,content:'""',transition:`all ${e.motionDurationFast} ${e.motionEaseInBack}, 
opacity ${e.motionDurationFast}`}},"& + span":{paddingInlineStart:e.paddingXS,paddingInlineEnd:e.paddingXS}})},{[` - ${n}:not(${n}-disabled), - ${t}:not(${t}-disabled) - `]:{[`&:hover ${t}-inner`]:{borderColor:e.colorPrimary}},[`${n}:not(${n}-disabled)`]:{[`&:hover ${t}-checked:not(${t}-disabled) ${t}-inner`]:{backgroundColor:e.colorPrimaryHover,borderColor:"transparent"},[`&:hover ${t}-checked:not(${t}-disabled):after`]:{borderColor:e.colorPrimaryHover}}},{[`${t}-checked`]:{[`${t}-inner`]:{backgroundColor:e.colorPrimary,borderColor:e.colorPrimary,"&:after":{opacity:1,transform:"rotate(45deg) scale(1) translate(-50%,-50%)",transition:`all ${e.motionDurationMid} ${e.motionEaseOutBack} ${e.motionDurationFast}`}}},[` - ${n}-checked:not(${n}-disabled), - ${t}-checked:not(${t}-disabled) - `]:{[`&:hover ${t}-inner`]:{backgroundColor:e.colorPrimaryHover,borderColor:"transparent"}}},{[t]:{"&-indeterminate":{[`${t}-inner`]:{backgroundColor:`${e.colorBgContainer} !important`,borderColor:`${e.colorBorder} !important`,"&:after":{top:"50%",insetInlineStart:"50%",width:e.calc(e.fontSizeLG).div(2).equal(),height:e.calc(e.fontSizeLG).div(2).equal(),backgroundColor:e.colorPrimary,border:0,transform:"translate(-50%, -50%) scale(1)",opacity:1,content:'""'}},[`&:hover ${t}-inner`]:{backgroundColor:`${e.colorBgContainer} !important`,borderColor:`${e.colorPrimary} !important`}}}},{[`${n}-disabled`]:{cursor:"not-allowed"},[`${t}-disabled`]:{[`&, ${t}-input`]:{cursor:"not-allowed",pointerEvents:"none"},[`${t}-inner`]:{background:e.colorBgContainerDisabled,borderColor:e.colorBorder,"&:after":{borderColor:e.colorTextDisabled}},"&:after":{display:"none"},"& + span":{color:e.colorTextDisabled},[`&${t}-indeterminate ${t}-inner::after`]:{background:e.colorTextDisabled}}}]};function Vw(e,t){const n=vn(t,{checkboxCls:`.${e}`,checkboxSize:t.controlInteractiveSize});return[WV(n)]}const 
gM=In("Checkbox",(e,t)=>{let{prefixCls:n}=t;return[Vw(n,e)]}),mM=e=>{const{prefixCls:t,componentCls:n}=e,r=`${n}-menu-item`,o=` - &${r}-expand ${r}-expand-icon, - ${r}-loading-icon -`;return[Vw(`${t}-checkbox`,e),{[n]:{"&-checkbox":{top:0,marginInlineEnd:e.paddingXS},"&-menus":{display:"flex",flexWrap:"nowrap",alignItems:"flex-start",[`&${n}-menu-empty`]:{[`${n}-menu`]:{width:"100%",height:"auto",[r]:{color:e.colorTextDisabled}}}},"&-menu":{flexGrow:1,flexShrink:0,minWidth:e.controlItemWidth,height:e.dropdownHeight,margin:0,padding:e.menuPadding,overflow:"auto",verticalAlign:"top",listStyle:"none","-ms-overflow-style":"-ms-autohiding-scrollbar","&:not(:last-child)":{borderInlineEnd:`${de(e.lineWidth)} ${e.lineType} ${e.colorSplit}`},"&-item":Object.assign(Object.assign({},Ka),{display:"flex",flexWrap:"nowrap",alignItems:"center",padding:e.optionPadding,lineHeight:e.lineHeight,cursor:"pointer",transition:`all ${e.motionDurationMid}`,borderRadius:e.borderRadiusSM,"&:hover":{background:e.controlItemBgHover},"&-disabled":{color:e.colorTextDisabled,cursor:"not-allowed","&:hover":{background:"transparent"},[o]:{color:e.colorTextDisabled}},[`&-active:not(${r}-disabled)`]:{"&, &:hover":{fontWeight:e.optionSelectedFontWeight,backgroundColor:e.optionSelectedBg}},"&-content":{flex:"auto"},[o]:{marginInlineStart:e.paddingXXS,color:e.colorTextDescription,fontSize:e.fontSizeIcon},"&-keyword":{color:e.colorHighlight}})}}}]},UV=e=>{const{componentCls:t,antCls:n}=e;return[{[t]:{width:e.controlWidth}},{[`${t}-dropdown`]:[{[`&${n}-select-dropdown`]:{padding:0}},mM(e)]},{[`${t}-dropdown-rtl`]:{direction:"rtl"}},_v(e)]},bM=e=>{const t=Math.round((e.controlHeight-e.fontSize*e.lineHeight)/2);return{controlWidth:184,controlItemWidth:111,dropdownHeight:180,optionSelectedBg:e.controlItemBgActive,optionSelectedFontWeight:e.fontWeightStrong,optionPadding:`${t}px 
${e.paddingSM}px`,menuPadding:e.paddingXXS}},yM=In("Cascader",e=>[UV(e)],bM),KV=e=>{const{componentCls:t}=e;return{[`${t}-panel`]:[mM(e),{display:"inline-flex",border:`${de(e.lineWidth)} ${e.lineType} ${e.colorSplit}`,borderRadius:e.borderRadiusLG,overflowX:"auto",maxWidth:"100%",[`${t}-menus`]:{alignItems:"stretch"},[`${t}-menu`]:{height:"auto"},"&-empty":{padding:e.paddingXXS}}]}},qV=MI(["Cascader","Panel"],e=>KV(e),bM);function XV(e){const{prefixCls:t,className:n,multiple:r,rootClassName:o,notFoundContent:i,direction:a,expandIcon:s}=e,[c,u,p,v]=pM(t,a),h=br(u),[m,b,y]=yM(u,h);qV(u);const w=p==="rtl",[C,S]=hM(c,w,s),E=i||(v==null?void 0:v("Cascader"))||d.createElement(Xv,{componentName:"Cascader"}),k=vM(u,r);return m(d.createElement(fM,Object.assign({},e,{checkable:k,prefixCls:u,className:ie(n,b,o,y,h),notFoundContent:E,direction:p,expandIcon:C,loadingIcon:S})))}var GV=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);oc===0?[s]:[].concat(Se(a),[t,s]),[]),o=[];let i=0;return r.forEach((a,s)=>{const c=i+a.length;let u=e.slice(i,c);i=c,s%2===1&&(u=d.createElement("span",{className:`${n}-menu-item-keyword`,key:`separator-${s}`},u)),o.push(u)}),o}const JV=(e,t,n,r)=>{const o=[],i=e.toLowerCase();return t.forEach((a,s)=>{s!==0&&o.push(" / ");let c=a[r.label];const u=typeof c;(u==="string"||u==="number")&&(c=ZV(String(c),i,n)),o.push(c)}),o},hc=d.forwardRef((e,t)=>{var 
n;const{prefixCls:r,size:o,disabled:i,className:a,rootClassName:s,multiple:c,bordered:u=!0,transitionName:p,choiceTransitionName:v="",popupClassName:h,dropdownClassName:m,expandIcon:b,placement:y,showSearch:w,allowClear:C=!0,notFoundContent:S,direction:E,getPopupContainer:k,status:O,showArrow:$,builtinPlacements:T,style:M,variant:P}=e,R=GV(e,["prefixCls","size","disabled","className","rootClassName","multiple","bordered","transitionName","choiceTransitionName","popupClassName","dropdownClassName","expandIcon","placement","showSearch","allowClear","notFoundContent","direction","getPopupContainer","status","showArrow","builtinPlacements","style","variant"]),A=Ln(R,["suffixIcon"]),{getPopupContainer:V,getPrefixCls:z,popupOverflow:B,cascader:_}=d.useContext(ht),{status:H,hasFeedback:j,isFormItemInput:L,feedbackIcon:F}=d.useContext(Vr),U=$d(H,O),[D,W,G,q]=pM(r,E),J=G==="rtl",Y=z(),Q=br(D),[te,ce,se]=pP(D,Q),ne=br(W),[ae]=yM(W,ne),{compactSize:ee,compactItemClassnames:re}=lc(D,E),[le,pe]=Gv("cascader",P,u),Oe=S||(q==null?void 0:q("Cascader"))||d.createElement(Xv,{componentName:"Cascader"}),ge=ie(h||m,`${W}-dropdown`,{[`${W}-dropdown-rtl`]:G==="rtl"},s,Q,ne,ce,se),Re=d.useMemo(()=>{if(!w)return w;let we={render:JV};return typeof w=="object"&&(we=Object.assign(Object.assign({},we),w)),we},[w]),ye=Go(we=>{var ze;return(ze=o??ee)!==null&&ze!==void 0?ze:we}),Te=d.useContext(So),Ae=i??Te,[me,Ie]=hM(D,J,b),Le=vM(W,c),Be=gP(e.suffixIcon,$),{suffixIcon:et,removeIcon:rt,clearIcon:Ze}=hP(Object.assign(Object.assign({},e),{hasFeedback:j,feedbackIcon:F,showSuffixIcon:Be,multiple:c,prefixCls:D,componentName:"Cascader"})),Ve=d.useMemo(()=>y!==void 0?y:J?"bottomRight":"bottomLeft",[y,J]),Ye=C===!0?{clearIcon:Ze}:C,[Ge]=sc("SelectLike",(n=A.dropdownStyle)===null||n===void 0?void 
0:n.zIndex),Fe=d.createElement(Md,Object.assign({prefixCls:D,className:ie(!r&&W,{[`${D}-lg`]:ye==="large",[`${D}-sm`]:ye==="small",[`${D}-rtl`]:J,[`${D}-${le}`]:pe,[`${D}-in-form-item`]:L},od(D,U,j),re,_==null?void 0:_.className,a,s,Q,ne,ce,se),disabled:Ae,style:Object.assign(Object.assign({},_==null?void 0:_.style),M)},A,{builtinPlacements:uP(T,B),direction:G,placement:Ve,notFoundContent:Oe,allowClear:Ye,showSearch:Re,expandIcon:me,suffixIcon:et,removeIcon:rt,loadingIcon:Ie,checkable:Le,dropdownClassName:ge,dropdownPrefixCls:r||W,dropdownStyle:Object.assign(Object.assign({},A.dropdownStyle),{zIndex:Ge}),choiceTransitionName:ra(Y,"",v),transitionName:ra(Y,"slide-up",p),getPopupContainer:k||V,ref:t}));return ae(te(Fe))}),eW=bw(hc);hc.SHOW_PARENT=QV;hc.SHOW_CHILD=YV;hc.Panel=XV;hc._InternalPanelDoNotUseOrYouWillBeFired=eW;const wM=ue.createContext(null);var tW=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{var n;const{prefixCls:r,className:o,rootClassName:i,children:a,indeterminate:s=!1,style:c,onMouseEnter:u,onMouseLeave:p,skipGroup:v=!1,disabled:h}=e,m=tW(e,["prefixCls","className","rootClassName","children","indeterminate","style","onMouseEnter","onMouseLeave","skipGroup","disabled"]),{getPrefixCls:b,direction:y,checkbox:w}=d.useContext(ht),C=d.useContext(wM),{isFormItemInput:S}=d.useContext(Vr),E=d.useContext(So),k=(n=(C==null?void 0:C.disabled)||h)!==null&&n!==void 0?n:E,O=d.useRef(m.value),$=d.useRef(null),T=Wr(t,$);d.useEffect(()=>{C==null||C.registerValue(m.value)},[]),d.useEffect(()=>{if(!v)return m.value!==O.current&&(C==null||C.cancelValue(O.current),C==null||C.registerValue(m.value),O.current=m.value),()=>C==null?void 0:C.cancelValue(m.value)},[m.value]),d.useEffect(()=>{var H;!((H=$.current)===null||H===void 0)&&H.input&&($.current.input.indeterminate=s)},[s]);const 
M=b("checkbox",r),P=br(M),[R,A,V]=gM(M,P),z=Object.assign({},m);C&&!v&&(z.onChange=function(){m.onChange&&m.onChange.apply(m,arguments),C.toggleOption&&C.toggleOption({label:a,value:m.value})},z.name=C.name,z.checked=C.value.includes(m.value));const B=ie(`${M}-wrapper`,{[`${M}-rtl`]:y==="rtl",[`${M}-wrapper-checked`]:z.checked,[`${M}-wrapper-disabled`]:k,[`${M}-wrapper-in-form-item`]:S},w==null?void 0:w.className,o,i,V,P,A),_=ie({[`${M}-indeterminate`]:s},jv,A);return R(d.createElement(Lv,{component:"Checkbox",disabled:k},d.createElement("label",{className:B,style:Object.assign(Object.assign({},w==null?void 0:w.style),c),onMouseEnter:u,onMouseLeave:p},d.createElement(XP,Object.assign({},z,{prefixCls:M,className:_,disabled:k,ref:T})),a!==void 0&&d.createElement("span",null,a))))},xM=d.forwardRef(nW);var rW=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{const{defaultValue:n,children:r,options:o=[],prefixCls:i,className:a,rootClassName:s,style:c,onChange:u}=e,p=rW(e,["defaultValue","children","options","prefixCls","className","rootClassName","style","onChange"]),{getPrefixCls:v,direction:h}=d.useContext(ht),[m,b]=d.useState(p.value||n||[]),[y,w]=d.useState([]);d.useEffect(()=>{"value"in p&&b(p.value||[])},[p.value]);const C=d.useMemo(()=>o.map(_=>typeof _=="string"||typeof _=="number"?{label:_,value:_}:_),[o]),S=_=>{w(H=>H.filter(j=>j!==_))},E=_=>{w(H=>[].concat(Se(H),[_]))},k=_=>{const H=m.indexOf(_.value),j=Se(m);H===-1?j.push(_.value):j.splice(H,1),"value"in p||b(j),u==null||u(j.filter(L=>y.includes(L)).sort((L,F)=>{const U=C.findIndex(W=>W.value===L),D=C.findIndex(W=>W.value===F);return U-D}))},O=v("checkbox",i),$=`${O}-group`,T=br(O),[M,P,R]=gM(O,T),A=Ln(p,["value","disabled"]),V=o.length?C.map(_=>d.createElement(xM,{prefixCls:O,key:_.value.toString(),disabled:"disabled"in 
_?_.disabled:p.disabled,value:_.value,checked:m.includes(_.value),onChange:_.onChange,className:`${$}-item`,style:_.style,title:_.title,id:_.id,required:_.required},_.label)):r,z={toggleOption:k,value:m,disabled:p.disabled,name:p.name,registerValue:E,cancelValue:S},B=ie($,{[`${$}-rtl`]:h==="rtl"},a,s,R,T,P);return M(d.createElement("div",Object.assign({className:B,style:c},A,{ref:t}),d.createElement(wM.Provider,{value:z},V)))}),Ns=xM;Ns.Group=oW;Ns.__ANT_CHECKBOX=!0;const SM=d.createContext({}),iW=e=>{const{componentCls:t}=e;return{[t]:{display:"flex",flexFlow:"row wrap",minWidth:0,"&::before, &::after":{display:"flex"},"&-no-wrap":{flexWrap:"nowrap"},"&-start":{justifyContent:"flex-start"},"&-center":{justifyContent:"center"},"&-end":{justifyContent:"flex-end"},"&-space-between":{justifyContent:"space-between"},"&-space-around":{justifyContent:"space-around"},"&-space-evenly":{justifyContent:"space-evenly"},"&-top":{alignItems:"flex-start"},"&-middle":{alignItems:"center"},"&-bottom":{alignItems:"flex-end"}}}},aW=e=>{const{componentCls:t}=e;return{[t]:{position:"relative",maxWidth:"100%",minHeight:1}}},sW=(e,t)=>{const{prefixCls:n,componentCls:r,gridColumns:o}=e,i={};for(let a=o;a>=0;a--)a===0?(i[`${r}${t}-${a}`]={display:"none"},i[`${r}-push-${a}`]={insetInlineStart:"auto"},i[`${r}-pull-${a}`]={insetInlineEnd:"auto"},i[`${r}${t}-push-${a}`]={insetInlineStart:"auto"},i[`${r}${t}-pull-${a}`]={insetInlineEnd:"auto"},i[`${r}${t}-offset-${a}`]={marginInlineStart:0},i[`${r}${t}-order-${a}`]={order:0}):(i[`${r}${t}-${a}`]=[{"--ant-display":"block",display:"block"},{display:"var(--ant-display)",flex:`0 0 ${a/o*100}%`,maxWidth:`${a/o*100}%`}],i[`${r}${t}-push-${a}`]={insetInlineStart:`${a/o*100}%`},i[`${r}${t}-pull-${a}`]={insetInlineEnd:`${a/o*100}%`},i[`${r}${t}-offset-${a}`]={marginInlineStart:`${a/o*100}%`},i[`${r}${t}-order-${a}`]={order:a});return i[`${r}${t}-flex`]={flex:`var(--${n}${t}-flex)`},i},jb=(e,t)=>sW(e,t),lW=(e,t,n)=>({[`@media (min-width: 
${de(t)})`]:Object.assign({},jb(e,n))}),cW=()=>({}),uW=()=>({}),dW=In("Grid",iW,cW),fW=In("Grid",e=>{const t=vn(e,{gridColumns:24}),n={"-sm":t.screenSMMin,"-md":t.screenMDMin,"-lg":t.screenLGMin,"-xl":t.screenXLMin,"-xxl":t.screenXXLMin};return[aW(t),jb(t,""),jb(t,"-xs"),Object.keys(n).map(r=>lW(t,n[r],r)).reduce((r,o)=>Object.assign(Object.assign({},r),o),{})]},uW);var pW=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{const{getPrefixCls:n,direction:r}=d.useContext(ht),{gutter:o,wrap:i}=d.useContext(SM),{prefixCls:a,span:s,order:c,offset:u,push:p,pull:v,className:h,children:m,flex:b,style:y}=e,w=pW(e,["prefixCls","span","order","offset","push","pull","className","children","flex","style"]),C=n("col",a),[S,E,k]=fW(C),O={};let $={};vW.forEach(P=>{let R={};const A=e[P];typeof A=="number"?R.span=A:typeof A=="object"&&(R=A||{}),delete w[P],$=Object.assign(Object.assign({},$),{[`${C}-${P}-${R.span}`]:R.span!==void 0,[`${C}-${P}-order-${R.order}`]:R.order||R.order===0,[`${C}-${P}-offset-${R.offset}`]:R.offset||R.offset===0,[`${C}-${P}-push-${R.push}`]:R.push||R.push===0,[`${C}-${P}-pull-${R.pull}`]:R.pull||R.pull===0,[`${C}-rtl`]:r==="rtl"}),R.flex&&($[`${C}-${P}-flex`]=!0,O[`--${C}-${P}-flex`]=Ok(R.flex))});const T=ie(C,{[`${C}-${s}`]:s!==void 0,[`${C}-order-${c}`]:c,[`${C}-offset-${u}`]:u,[`${C}-push-${p}`]:p,[`${C}-pull-${v}`]:v},h,$,E,k),M={};if(o&&o[0]>0){const P=o[0]/2;M.paddingLeft=P,M.paddingRight=P}return b&&(M.flex=Ok(b),i===!1&&!M.minWidth&&(M.minWidth=0)),S(d.createElement("div",Object.assign({},w,{style:Object.assign(Object.assign(Object.assign({},M),y),O),className:T,ref:t}),m))});var hW=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var 
o=0,r=Object.getOwnPropertySymbols(e);o{if(typeof e=="string"&&r(e),typeof e=="object")for(let i=0;i{o()},[JSON.stringify(e),t]),n}const CM=d.forwardRef((e,t)=>{const{prefixCls:n,justify:r,align:o,className:i,style:a,children:s,gutter:c=0,wrap:u}=e,p=hW(e,["prefixCls","justify","align","className","style","children","gutter","wrap"]),{getPrefixCls:v,direction:h}=d.useContext(ht),[m,b]=d.useState({xs:!0,sm:!0,md:!0,lg:!0,xl:!0,xxl:!0}),[y,w]=d.useState({xs:!1,sm:!1,md:!1,lg:!1,xl:!1,xxl:!1}),C=$k(o,y),S=$k(r,y),E=d.useRef(c),k=bP();d.useEffect(()=>{const j=k.subscribe(L=>{w(L);const F=E.current||0;(!Array.isArray(F)&&typeof F=="object"||Array.isArray(F)&&(typeof F[0]=="object"||typeof F[1]=="object"))&&b(L)});return()=>k.unsubscribe(j)},[]);const O=()=>{const j=[void 0,void 0];return(Array.isArray(c)?c:[c,void 0]).forEach((F,U)=>{if(typeof F=="object")for(let D=0;D0?R[0]/-2:void 0;z&&(V.marginLeft=z,V.marginRight=z);const[B,_]=R;V.rowGap=_;const H=d.useMemo(()=>({gutter:[B,_],wrap:u}),[B,_,u]);return T(d.createElement(SM.Provider,{value:H},d.createElement("div",Object.assign({},p,{className:A,style:Object.assign(Object.assign({},V),a),ref:t}),s)))});function gW(e){return!!(e.addonBefore||e.addonAfter)}function mW(e){return!!(e.prefix||e.suffix||e.allowClear)}function Ik(e,t,n){var r=t.cloneNode(!0),o=Object.create(e,{target:{value:r},currentTarget:{value:r}});return r.value=n,typeof t.selectionStart=="number"&&typeof t.selectionEnd=="number"&&(r.selectionStart=t.selectionStart,r.selectionEnd=t.selectionEnd),r.setSelectionRange=function(){t.setSelectionRange.apply(t,arguments)},o}function av(e,t,n,r){if(n){var o=t;if(t.type==="click"){o=Ik(t,e,""),n(o);return}if(e.type!=="file"&&r!==void 0){o=Ik(t,e,r),n(o);return}n(o)}}function bW(e,t){if(e){e.focus(t);var n=t||{},r=n.cursor;if(r){var o=e.value.length;switch(r){case"start":e.setSelectionRange(0,0);break;case"end":e.setSelectionRange(o,o);break;default:e.setSelectionRange(0,o)}}}}var 
EM=ue.forwardRef(function(e,t){var n,r,o=e.inputElement,i=e.children,a=e.prefixCls,s=e.prefix,c=e.suffix,u=e.addonBefore,p=e.addonAfter,v=e.className,h=e.style,m=e.disabled,b=e.readOnly,y=e.focused,w=e.triggerFocus,C=e.allowClear,S=e.value,E=e.handleReset,k=e.hidden,O=e.classes,$=e.classNames,T=e.dataAttrs,M=e.styles,P=e.components,R=e.onClear,A=i??o,V=(P==null?void 0:P.affixWrapper)||"span",z=(P==null?void 0:P.groupWrapper)||"span",B=(P==null?void 0:P.wrapper)||"span",_=(P==null?void 0:P.groupAddon)||"span",H=d.useRef(null),j=function(re){var le;(le=H.current)!==null&&le!==void 0&&le.contains(re.target)&&(w==null||w())},L=mW(e),F=d.cloneElement(A,{value:S,className:ie(A.props.className,!L&&($==null?void 0:$.variant))||null}),U=d.useRef(null);if(ue.useImperativeHandle(t,function(){return{nativeElement:U.current||H.current}}),L){var D=null;if(C){var W=!m&&!b&&S,G="".concat(a,"-clear-icon"),q=st(C)==="object"&&C!==null&&C!==void 0&&C.clearIcon?C.clearIcon:"✖";D=ue.createElement("span",{onClick:function(re){E==null||E(re),R==null||R()},onMouseDown:function(re){return re.preventDefault()},className:ie(G,K(K({},"".concat(G,"-hidden"),!W),"".concat(G,"-has-suffix"),!!c)),role:"button",tabIndex:-1},q)}var J="".concat(a,"-affix-wrapper"),Y=ie(J,K(K(K(K(K({},"".concat(a,"-disabled"),m),"".concat(J,"-disabled"),m),"".concat(J,"-focused"),y),"".concat(J,"-readonly"),b),"".concat(J,"-input-with-clear-btn"),c&&C&&S),O==null?void 0:O.affixWrapper,$==null?void 0:$.affixWrapper,$==null?void 0:$.variant),Q=(c||C)&&ue.createElement("span",{className:ie("".concat(a,"-suffix"),$==null?void 0:$.suffix),style:M==null?void 0:M.suffix},D,c);F=ue.createElement(V,$e({className:Y,style:M==null?void 0:M.affixWrapper,onClick:j},T==null?void 0:T.affixWrapper,{ref:H}),s&&ue.createElement("span",{className:ie("".concat(a,"-prefix"),$==null?void 0:$.prefix),style:M==null?void 0:M.prefix},s),F,Q)}if(gW(e)){var 
te="".concat(a,"-group"),ce="".concat(te,"-addon"),se="".concat(te,"-wrapper"),ne=ie("".concat(a,"-wrapper"),te,O==null?void 0:O.wrapper,$==null?void 0:$.wrapper),ae=ie(se,K({},"".concat(se,"-disabled"),m),O==null?void 0:O.group,$==null?void 0:$.groupWrapper);F=ue.createElement(z,{className:ae,ref:U},ue.createElement(B,{className:ne},u&&ue.createElement(_,{className:ce},u),F,p&&ue.createElement(_,{className:ce},p)))}return ue.cloneElement(F,{className:ie((n=F.props)===null||n===void 0?void 0:n.className,v)||null,style:Z(Z({},(r=F.props)===null||r===void 0?void 0:r.style),h),hidden:k})}),yW=["show"];function kM(e,t){return d.useMemo(function(){var n={};t&&(n.show=st(t)==="object"&&t.formatter?t.formatter:!!t),n=Z(Z({},n),e);var r=n,o=r.show,i=Mt(r,yW);return Z(Z({},i),{},{show:!!o,showFormatter:typeof o=="function"?o:void 0,strategy:i.strategy||function(a){return a.length}})},[e,t])}var wW=["autoComplete","onChange","onFocus","onBlur","onPressEnter","onKeyDown","onKeyUp","prefixCls","disabled","htmlSize","className","maxLength","suffix","showCount","count","type","classes","classNames","styles","onCompositionStart","onCompositionEnd"],xW=d.forwardRef(function(e,t){var n=e.autoComplete,r=e.onChange,o=e.onFocus,i=e.onBlur,a=e.onPressEnter,s=e.onKeyDown,c=e.onKeyUp,u=e.prefixCls,p=u===void 0?"rc-input":u,v=e.disabled,h=e.htmlSize,m=e.className,b=e.maxLength,y=e.suffix,w=e.showCount,C=e.count,S=e.type,E=S===void 0?"text":S,k=e.classes,O=e.classNames,$=e.styles,T=e.onCompositionStart,M=e.onCompositionEnd,P=Mt(e,wW),R=d.useState(!1),A=ve(R,2),V=A[0],z=A[1],B=d.useRef(!1),_=d.useRef(!1),H=d.useRef(null),j=d.useRef(null),L=function(Ie){H.current&&bW(H.current,Ie)},F=Dn(e.defaultValue,{value:e.value}),U=ve(F,2),D=U[0],W=U[1],G=D==null?"":String(D),q=d.useState(null),J=ve(q,2),Y=J[0],Q=J[1],te=kM(C,w),ce=te.max||b,se=te.strategy(G),ne=!!ce&&se>ce;d.useImperativeHandle(t,function(){var me;return{focus:L,blur:function(){var Le;(Le=H.current)===null||Le===void 
0||Le.blur()},setSelectionRange:function(Le,Be,et){var rt;(rt=H.current)===null||rt===void 0||rt.setSelectionRange(Le,Be,et)},select:function(){var Le;(Le=H.current)===null||Le===void 0||Le.select()},input:H.current,nativeElement:((me=j.current)===null||me===void 0?void 0:me.nativeElement)||H.current}}),d.useEffect(function(){z(function(me){return me&&v?!1:me})},[v]);var ae=function(Ie,Le,Be){var et=Le;if(!B.current&&te.exceedFormatter&&te.max&&te.strategy(Le)>te.max){if(et=te.exceedFormatter(Le,{max:te.max}),Le!==et){var rt,Ze;Q([((rt=H.current)===null||rt===void 0?void 0:rt.selectionStart)||0,((Ze=H.current)===null||Ze===void 0?void 0:Ze.selectionEnd)||0])}}else if(Be.source==="compositionEnd")return;W(et),H.current&&av(H.current,Ie,r,et)};d.useEffect(function(){if(Y){var me;(me=H.current)===null||me===void 0||me.setSelectionRange.apply(me,Se(Y))}},[Y]);var ee=function(Ie){ae(Ie,Ie.target.value,{source:"change"})},re=function(Ie){B.current=!1,ae(Ie,Ie.currentTarget.value,{source:"compositionEnd"}),M==null||M(Ie)},le=function(Ie){a&&Ie.key==="Enter"&&!_.current&&(_.current=!0,a(Ie)),s==null||s(Ie)},pe=function(Ie){Ie.key==="Enter"&&(_.current=!1),c==null||c(Ie)},Oe=function(Ie){z(!0),o==null||o(Ie)},ge=function(Ie){z(!1),i==null||i(Ie)},Re=function(Ie){W(""),L(),H.current&&av(H.current,Ie,r)},ye=ne&&"".concat(p,"-out-of-range"),Te=function(){var Ie=Ln(e,["prefixCls","onPressEnter","addonBefore","addonAfter","prefix","suffix","allowClear","defaultValue","showCount","count","classes","htmlSize","styles","classNames","onClear"]);return ue.createElement("input",$e({autoComplete:n},Ie,{onChange:ee,onFocus:Oe,onBlur:ge,onKeyDown:le,onKeyUp:pe,className:ie(p,K({},"".concat(p,"-disabled"),v),O==null?void 0:O.input),style:$==null?void 0:$.input,ref:H,size:h,type:E,onCompositionStart:function(Be){B.current=!0,T==null||T(Be)},onCompositionEnd:re}))},Ae=function(){var Ie=Number(ce)>0;if(y||te.show){var 
Le=te.showFormatter?te.showFormatter({value:G,count:se,maxLength:ce}):"".concat(se).concat(Ie?" / ".concat(ce):"");return ue.createElement(ue.Fragment,null,te.show&&ue.createElement("span",{className:ie("".concat(p,"-show-count-suffix"),K({},"".concat(p,"-show-count-has-suffix"),!!y),O==null?void 0:O.count),style:Z({},$==null?void 0:$.count)},Le),y)}return null};return ue.createElement(EM,$e({},P,{prefixCls:p,className:ie(m,ye),handleReset:Re,value:G,focused:V,triggerFocus:L,suffix:Ae(),disabled:v,classes:k,classNames:O,styles:$}),Te())});const SW=e=>{const{getPrefixCls:t,direction:n}=d.useContext(ht),{prefixCls:r,className:o}=e,i=t("input-group",r),a=t("input"),[s,c]=Fw(a),u=ie(i,{[`${i}-lg`]:e.size==="large",[`${i}-sm`]:e.size==="small",[`${i}-compact`]:e.compact,[`${i}-rtl`]:n==="rtl"},c,o),p=d.useContext(Vr),v=d.useMemo(()=>Object.assign(Object.assign({},p),{isFormItemInput:!1}),[p]);return s(d.createElement("span",{className:u,style:e.style,onMouseEnter:e.onMouseEnter,onMouseLeave:e.onMouseLeave,onFocus:e.onFocus,onBlur:e.onBlur},d.createElement(Vr.Provider,{value:v},e.children)))},OM=e=>{let t;return typeof e=="object"&&(e!=null&&e.clearIcon)?t=e:e&&(t={clearIcon:ue.createElement(bd,null)}),t};function $M(e,t){const n=d.useRef([]),r=()=>{n.current.push(setTimeout(()=>{var o,i,a,s;!((o=e.current)===null||o===void 0)&&o.input&&((i=e.current)===null||i===void 0?void 0:i.input.getAttribute("type"))==="password"&&(!((a=e.current)===null||a===void 0)&&a.input.hasAttribute("value"))&&((s=e.current)===null||s===void 0||s.input.removeAttribute("value"))}))};return d.useEffect(()=>(t&&r(),()=>n.current.forEach(o=>{o&&clearTimeout(o)})),[]),r}function CW(e){return!!(e.prefix||e.suffix||e.allowClear||e.showCount)}var EW=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{var 
n;const{prefixCls:r,bordered:o=!0,status:i,size:a,disabled:s,onBlur:c,onFocus:u,suffix:p,allowClear:v,addonAfter:h,addonBefore:m,className:b,style:y,styles:w,rootClassName:C,onChange:S,classNames:E,variant:k}=e,O=EW(e,["prefixCls","bordered","status","size","disabled","onBlur","onFocus","suffix","allowClear","addonAfter","addonBefore","className","style","styles","rootClassName","onChange","classNames","variant"]),{getPrefixCls:$,direction:T,input:M}=ue.useContext(ht),P=$("input",r),R=d.useRef(null),A=br(P),[V,z,B]=Fw(P,A),{compactSize:_,compactItemClassnames:H}=lc(P,T),j=Go(ee=>{var re;return(re=a??_)!==null&&re!==void 0?re:ee}),L=ue.useContext(So),F=s??L,{status:U,hasFeedback:D,feedbackIcon:W}=d.useContext(Vr),G=$d(U,i),q=CW(e)||!!D;d.useRef(q);const J=$M(R,!0),Y=ee=>{J(),c==null||c(ee)},Q=ee=>{J(),u==null||u(ee)},te=ee=>{J(),S==null||S(ee)},ce=(D||p)&&ue.createElement(ue.Fragment,null,p,D&&W),se=OM(v??(M==null?void 0:M.allowClear)),[ne,ae]=Gv("input",k,o);return V(ue.createElement(xW,Object.assign({ref:Wr(t,R),prefixCls:P,autoComplete:M==null?void 0:M.autoComplete},O,{disabled:F,onBlur:Y,onFocus:Q,style:Object.assign(Object.assign({},M==null?void 0:M.style),y),styles:Object.assign(Object.assign({},M==null?void 0:M.styles),w),suffix:ce,allowClear:se,className:ie(b,C,B,A,H,M==null?void 0:M.className),onChange:te,addonBefore:m&&ue.createElement(nd,{form:!0,space:!0},m),addonAfter:h&&ue.createElement(nd,{form:!0,space:!0},h),classNames:Object.assign(Object.assign(Object.assign({},E),M==null?void 0:M.classNames),{input:ie({[`${P}-sm`]:j==="small",[`${P}-lg`]:j==="large",[`${P}-rtl`]:T==="rtl"},E==null?void 0:E.input,(n=M==null?void 0:M.classNames)===null||n===void 0?void 
0:n.input,z),variant:ie({[`${P}-${ne}`]:ae},od(P,G)),affixWrapper:ie({[`${P}-affix-wrapper-sm`]:j==="small",[`${P}-affix-wrapper-lg`]:j==="large",[`${P}-affix-wrapper-rtl`]:T==="rtl"},z),wrapper:ie({[`${P}-group-rtl`]:T==="rtl"},z),groupWrapper:ie({[`${P}-group-wrapper-sm`]:j==="small",[`${P}-group-wrapper-lg`]:j==="large",[`${P}-group-wrapper-rtl`]:T==="rtl",[`${P}-group-wrapper-${ne}`]:ae},od(`${P}-group-wrapper`,G,D),z)})})))}),OW=e=>{const{componentCls:t,paddingXS:n}=e;return{[t]:{display:"inline-flex",alignItems:"center",flexWrap:"nowrap",columnGap:n,"&-rtl":{direction:"rtl"},[`${t}-input`]:{textAlign:"center",paddingInline:e.paddingXXS},[`&${t}-sm ${t}-input`]:{paddingInline:e.calc(e.paddingXXS).div(2).equal()},[`&${t}-lg ${t}-input`]:{paddingInline:e.paddingXS}}}},$W=In(["Input","OTP"],e=>{const t=vn(e,Lw(e));return[OW(t)]},Bw);var IW=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{const{value:n,onChange:r,onActiveChange:o,index:i,mask:a}=e,s=IW(e,["value","onChange","onActiveChange","index","mask"]),c=n&&typeof a=="string"?a:n,u=b=>{r(i,b.target.value)},p=d.useRef(null);d.useImperativeHandle(t,()=>p.current);const v=()=>{bn(()=>{var b;const y=(b=p.current)===null||b===void 0?void 0:b.input;document.activeElement===y&&y&&y.select()})},h=b=>{let{key:y}=b;y==="ArrowLeft"?o(i-1):y==="ArrowRight"&&o(i+1),v()},m=b=>{b.key==="Backspace"&&!n&&o(i-1),v()};return d.createElement(rh,Object.assign({type:a===!0?"password":"text"},s,{ref:p,value:c,onInput:u,onFocus:v,onKeyDown:h,onKeyUp:m,onMouseDown:v,onMouseUp:v}))});var PW=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var 
o=0,r=Object.getOwnPropertySymbols(e);o{const{prefixCls:n,length:r=6,size:o,defaultValue:i,value:a,onChange:s,formatter:c,variant:u,disabled:p,status:v,autoFocus:h,mask:m,type:b}=e,y=PW(e,["prefixCls","length","size","defaultValue","value","onChange","formatter","variant","disabled","status","autoFocus","mask","type"]),{getPrefixCls:w,direction:C}=d.useContext(ht),S=w("otp",n),E=Gr(y,{aria:!0,data:!0,attr:!0}),k=br(S),[O,$,T]=$W(S,k),M=Go(W=>o??W),P=d.useContext(Vr),R=$d(P.status,v),A=d.useMemo(()=>Object.assign(Object.assign({},P),{status:R,hasFeedback:!1,feedbackIcon:null}),[P,R]),V=d.useRef(null),z=d.useRef({});d.useImperativeHandle(t,()=>({focus:()=>{var W;(W=z.current[0])===null||W===void 0||W.focus()},blur:()=>{var W;for(let G=0;Gc?c(W):W,[_,H]=d.useState(Jf(B(i||"")));d.useEffect(()=>{a!==void 0&&H(Jf(a))},[a]);const j=gn(W=>{H(W),s&&W.length===r&&W.every(G=>G)&&W.some((G,q)=>_[q]!==G)&&s(W.join(""))}),L=gn((W,G)=>{let q=Se(_);for(let Y=0;Y=0&&!q[Y];Y-=1)q.pop();const J=B(q.map(Y=>Y||" ").join(""));return q=Jf(J).map((Y,Q)=>Y===" "&&!q[Q]?q[Q]:Y),q}),F=(W,G)=>{var q;const J=L(W,G),Y=Math.min(W+G.length,r-1);Y!==W&&((q=z.current[Y])===null||q===void 0||q.focus()),j(J)},U=W=>{var G;(G=z.current[W])===null||G===void 0||G.focus()},D={variant:u,disabled:p,status:R,mask:m,type:b};return O(d.createElement("div",Object.assign({},E,{ref:V,className:ie(S,{[`${S}-sm`]:M==="small",[`${S}-lg`]:M==="large",[`${S}-rtl`]:C==="rtl"},T,$)}),d.createElement(Vr.Provider,{value:A},Array.from({length:r}).map((W,G)=>{const q=`otp-${G}`,J=_[G]||"";return d.createElement(TW,Object.assign({ref:Y=>{z.current[G]=Y},key:q,index:G,size:M,htmlSize:1,className:`${S}-input`,onChange:F,value:J,onActiveChange:U,autoFocus:G===0&&h},D))}))))});var NW={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M942.2 486.2Q889.47 375.11 816.7 305l-50.88 50.88C807.31 395.53 843.45 447.4 874.7 512 791.5 684.2 673.4 766 512 766q-72.67 0-133.87-22.38L323 
798.75Q408 838 512 838q288.3 0 430.2-300.3a60.29 60.29 0 000-51.5zm-63.57-320.64L836 122.88a8 8 0 00-11.32 0L715.31 232.2Q624.86 186 512 186q-288.3 0-430.2 300.3a60.3 60.3 0 000 51.5q56.69 119.4 136.5 191.41L112.48 835a8 8 0 000 11.31L155.17 889a8 8 0 0011.31 0l712.15-712.12a8 8 0 000-11.32zM149.3 512C232.6 339.8 350.7 258 512 258c54.54 0 104.13 9.36 149.12 28.39l-70.3 70.3a176 176 0 00-238.13 238.13l-83.42 83.42C223.1 637.49 183.3 582.28 149.3 512zm246.7 0a112.11 112.11 0 01146.2-106.69L401.31 546.2A112 112 0 01396 512z"}},{tag:"path",attrs:{d:"M508 624c-3.46 0-6.87-.16-10.25-.47l-52.82 52.82a176.09 176.09 0 00227.42-227.42l-52.82 52.82c.31 3.38.47 6.79.47 10.25a111.94 111.94 0 01-112 112z"}}]},name:"eye-invisible",theme:"outlined"},RW=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:NW}))},DW=d.forwardRef(RW),jW={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M942.2 486.2C847.4 286.5 704.1 186 512 186c-192.2 0-335.4 100.5-430.2 300.3a60.3 60.3 0 000 51.5C176.6 737.5 319.9 838 512 838c192.2 0 335.4-100.5 430.2-300.3 7.7-16.2 7.7-35 0-51.5zM512 766c-161.3 0-279.4-81.8-362.7-254C232.6 339.8 350.7 258 512 258c161.3 0 279.4 81.8 362.7 254C791.5 684.2 673.4 766 512 766zm-4-430c-97.2 0-176 78.8-176 176s78.8 176 176 176 176-78.8 176-176-78.8-176-176-176zm0 288c-61.9 0-112-50.1-112-112s50.1-112 112-112 112 50.1 112 112-50.1 112-112 112z"}}]},name:"eye",theme:"outlined"},LW=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:jW}))},IM=d.forwardRef(LW),BW=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);oe?d.createElement(IM,null):d.createElement(DW,null),zW={click:"onClick",hover:"onMouseOver"},HW=d.forwardRef((e,t)=>{const{disabled:n,action:r="click",visibilityToggle:o=!0,iconRender:i=AW}=e,a=d.useContext(So),s=n??a,c=typeof 
o=="object"&&o.visible!==void 0,[u,p]=d.useState(()=>c?o.visible:!1),v=d.useRef(null);d.useEffect(()=>{c&&p(o.visible)},[c,o]);const h=$M(v),m=()=>{s||(u&&h(),p(R=>{var A;const V=!R;return typeof o=="object"&&((A=o.onVisibleChange)===null||A===void 0||A.call(o,V)),V}))},b=R=>{const A=zW[r]||"",V=i(u),z={[A]:m,className:`${R}-icon`,key:"passwordIcon",onMouseDown:B=>{B.preventDefault()},onMouseUp:B=>{B.preventDefault()}};return d.cloneElement(d.isValidElement(V)?V:d.createElement("span",null,V),z)},{className:y,prefixCls:w,inputPrefixCls:C,size:S}=e,E=BW(e,["className","prefixCls","inputPrefixCls","size"]),{getPrefixCls:k}=d.useContext(ht),O=k("input",C),$=k("input-password",w),T=o&&b($),M=ie($,y,{[`${$}-${S}`]:!!S}),P=Object.assign(Object.assign({},Ln(E,["suffix","iconRender","visibilityToggle"])),{type:u?"text":"password",className:M,prefixCls:O,suffix:T});return S&&(P.size=S),d.createElement(rh,Object.assign({ref:Wr(t,v)},P))});var FW=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{const{prefixCls:n,inputPrefixCls:r,className:o,size:i,suffix:a,enterButton:s=!1,addonAfter:c,loading:u,disabled:p,onSearch:v,onChange:h,onCompositionStart:m,onCompositionEnd:b}=e,y=FW(e,["prefixCls","inputPrefixCls","className","size","suffix","enterButton","addonAfter","loading","disabled","onSearch","onChange","onCompositionStart","onCompositionEnd"]),{getPrefixCls:w,direction:C}=d.useContext(ht),S=d.useRef(!1),E=w("input-search",n),k=w("input",r),{compactSize:O}=lc(E,C),$=Go(U=>{var D;return(D=i??O)!==null&&D!==void 0?D:U}),T=d.useRef(null),M=U=>{U!=null&&U.target&&U.type==="click"&&v&&v(U.target.value,U,{source:"clear"}),h==null||h(U)},P=U=>{var D;document.activeElement===((D=T.current)===null||D===void 0?void 0:D.input)&&U.preventDefault()},R=U=>{var D,W;v&&v((W=(D=T.current)===null||D===void 0?void 
0:D.input)===null||W===void 0?void 0:W.value,U,{source:"input"})},A=U=>{S.current||u||R(U)},V=typeof s=="boolean"?d.createElement(Ew,null):null,z=`${E}-button`;let B;const _=s||{},H=_.type&&_.type.__ANT_BUTTON===!0;H||_.type==="button"?B=Dr(_,Object.assign({onMouseDown:P,onClick:U=>{var D,W;(W=(D=_==null?void 0:_.props)===null||D===void 0?void 0:D.onClick)===null||W===void 0||W.call(D,U),R(U)},key:"enterButton"},H?{className:z,size:$}:{})):B=d.createElement(jr,{className:z,type:s?"primary":void 0,size:$,disabled:p,key:"enterButton",onMouseDown:P,onClick:R,loading:u,icon:V},s),c&&(B=[B,Dr(c,{key:"addonAfter"})]);const j=ie(E,{[`${E}-rtl`]:C==="rtl",[`${E}-${$}`]:!!$,[`${E}-with-button`]:!!s},o),L=U=>{S.current=!0,m==null||m(U)},F=U=>{S.current=!1,b==null||b(U)};return d.createElement(rh,Object.assign({ref:Wr(T,t),onPressEnter:A},y,{size:$,onCompositionStart:L,onCompositionEnd:F,prefixCls:k,addonAfter:B,suffix:a,onChange:M,className:j,disabled:p}))});var VW=` - min-height:0 !important; - max-height:none !important; - height:0 !important; - visibility:hidden !important; - overflow:hidden !important; - position:absolute !important; - z-index:-1000 !important; - top:0 !important; - right:0 !important; - pointer-events: none !important; -`,WW=["letter-spacing","line-height","padding-top","padding-bottom","font-family","font-weight","font-size","font-variant","text-rendering","text-transform","width","text-indent","padding-left","padding-right","border-width","box-sizing","word-break","white-space"],jm={},go;function UW(e){var t=arguments.length>1&&arguments[1]!==void 0?arguments[1]:!1,n=e.getAttribute("id")||e.getAttribute("data-reactid")||e.getAttribute("name");if(t&&jm[n])return jm[n];var 
r=window.getComputedStyle(e),o=r.getPropertyValue("box-sizing")||r.getPropertyValue("-moz-box-sizing")||r.getPropertyValue("-webkit-box-sizing"),i=parseFloat(r.getPropertyValue("padding-bottom"))+parseFloat(r.getPropertyValue("padding-top")),a=parseFloat(r.getPropertyValue("border-bottom-width"))+parseFloat(r.getPropertyValue("border-top-width")),s=WW.map(function(u){return"".concat(u,":").concat(r.getPropertyValue(u))}).join(";"),c={sizingStyle:s,paddingSize:i,borderSize:a,boxSizing:o};return t&&n&&(jm[n]=c),c}function KW(e){var t=arguments.length>1&&arguments[1]!==void 0?arguments[1]:!1,n=arguments.length>2&&arguments[2]!==void 0?arguments[2]:null,r=arguments.length>3&&arguments[3]!==void 0?arguments[3]:null;go||(go=document.createElement("textarea"),go.setAttribute("tab-index","-1"),go.setAttribute("aria-hidden","true"),go.setAttribute("name","hiddenTextarea"),document.body.appendChild(go)),e.getAttribute("wrap")?go.setAttribute("wrap",e.getAttribute("wrap")):go.removeAttribute("wrap");var o=UW(e,t),i=o.paddingSize,a=o.borderSize,s=o.boxSizing,c=o.sizingStyle;go.setAttribute("style","".concat(c,";").concat(VW)),go.value=e.value||e.placeholder||"";var u=void 0,p=void 0,v,h=go.scrollHeight;if(s==="border-box"?h+=a:s==="content-box"&&(h-=i),n!==null||r!==null){go.value=" ";var m=go.scrollHeight-i;n!==null&&(u=m*n,s==="border-box"&&(u=u+i+a),h=Math.max(u,h)),r!==null&&(p=m*r,s==="border-box"&&(p=p+i+a),v=h>p?"":"hidden",h=Math.min(p,h))}var b={height:h,overflowY:v,resize:"none"};return u&&(b.minHeight=u),p&&(b.maxHeight=p),b}var qW=["prefixCls","defaultValue","value","autoSize","onResize","className","style","disabled","onChange","onInternalAutoSize"],Lm=0,Bm=1,Am=2,XW=d.forwardRef(function(e,t){var n=e,r=n.prefixCls,o=n.defaultValue,i=n.value,a=n.autoSize,s=n.onResize,c=n.className,u=n.style,p=n.disabled,v=n.onChange;n.onInternalAutoSize;var h=Mt(n,qW),m=Dn(o,{value:i,postState:function(q){return 
q??""}}),b=ve(m,2),y=b[0],w=b[1],C=function(q){w(q.target.value),v==null||v(q)},S=d.useRef();d.useImperativeHandle(t,function(){return{textArea:S.current}});var E=d.useMemo(function(){return a&&st(a)==="object"?[a.minRows,a.maxRows]:[]},[a]),k=ve(E,2),O=k[0],$=k[1],T=!!a,M=function(){try{if(document.activeElement===S.current){var q=S.current,J=q.selectionStart,Y=q.selectionEnd,Q=q.scrollTop;S.current.setSelectionRange(J,Y),S.current.scrollTop=Q}}catch{}},P=d.useState(Am),R=ve(P,2),A=R[0],V=R[1],z=d.useState(),B=ve(z,2),_=B[0],H=B[1],j=function(){V(Lm)};sn(function(){T&&j()},[i,O,$,T]),sn(function(){if(A===Lm)V(Bm);else if(A===Bm){var G=KW(S.current,!1,O,$);V(Am),H(G)}else M()},[A]);var L=d.useRef(),F=function(){bn.cancel(L.current)},U=function(q){A===Am&&(s==null||s(q),a&&(F(),L.current=bn(function(){j()})))};d.useEffect(function(){return F},[]);var D=T?_:null,W=Z(Z({},u),D);return(A===Lm||A===Bm)&&(W.overflowY="hidden",W.overflowX="hidden"),d.createElement(qo,{onResize:U,disabled:!(a||s)},d.createElement("textarea",$e({},h,{ref:S,style:W,className:ie(r,c,K({},"".concat(r,"-disabled"),p)),disabled:p,value:y,onChange:C})))}),GW=["defaultValue","value","onFocus","onBlur","onChange","allowClear","maxLength","onCompositionStart","onCompositionEnd","suffix","prefixCls","showCount","count","className","style","disabled","hidden","classNames","styles","onResize","onClear","onPressEnter","readOnly","autoSize","onKeyDown"],YW=ue.forwardRef(function(e,t){var n,r=e.defaultValue,o=e.value,i=e.onFocus,a=e.onBlur,s=e.onChange,c=e.allowClear,u=e.maxLength,p=e.onCompositionStart,v=e.onCompositionEnd,h=e.suffix,m=e.prefixCls,b=m===void 
0?"rc-textarea":m,y=e.showCount,w=e.count,C=e.className,S=e.style,E=e.disabled,k=e.hidden,O=e.classNames,$=e.styles,T=e.onResize,M=e.onClear,P=e.onPressEnter,R=e.readOnly,A=e.autoSize,V=e.onKeyDown,z=Mt(e,GW),B=Dn(r,{value:o,defaultValue:r}),_=ve(B,2),H=_[0],j=_[1],L=H==null?"":String(H),F=ue.useState(!1),U=ve(F,2),D=U[0],W=U[1],G=ue.useRef(!1),q=ue.useState(null),J=ve(q,2),Y=J[0],Q=J[1],te=d.useRef(null),ce=d.useRef(null),se=function(){var we;return(we=ce.current)===null||we===void 0?void 0:we.textArea},ne=function(){se().focus()};d.useImperativeHandle(t,function(){var Fe;return{resizableTextArea:ce.current,focus:ne,blur:function(){se().blur()},nativeElement:((Fe=te.current)===null||Fe===void 0?void 0:Fe.nativeElement)||se()}}),d.useEffect(function(){W(function(Fe){return!E&&Fe})},[E]);var ae=ue.useState(null),ee=ve(ae,2),re=ee[0],le=ee[1];ue.useEffect(function(){if(re){var Fe;(Fe=se()).setSelectionRange.apply(Fe,Se(re))}},[re]);var pe=kM(w,y),Oe=(n=pe.max)!==null&&n!==void 0?n:u,ge=Number(Oe)>0,Re=pe.strategy(L),ye=!!Oe&&Re>Oe,Te=function(we,ze){var Me=ze;!G.current&&pe.exceedFormatter&&pe.max&&pe.strategy(ze)>pe.max&&(Me=pe.exceedFormatter(ze,{max:pe.max}),ze!==Me&&le([se().selectionStart||0,se().selectionEnd||0])),j(Me),av(we.currentTarget,we,s,Me)},Ae=function(we){G.current=!0,p==null||p(we)},me=function(we){G.current=!1,Te(we,we.currentTarget.value),v==null||v(we)},Ie=function(we){Te(we,we.target.value)},Le=function(we){we.key==="Enter"&&P&&P(we),V==null||V(we)},Be=function(we){W(!0),i==null||i(we)},et=function(we){W(!1),a==null||a(we)},rt=function(we){j(""),ne(),av(se(),we,s)},Ze=h,Ve;pe.show&&(pe.showFormatter?Ve=pe.showFormatter({value:L,count:Re,maxLength:Oe}):Ve="".concat(Re).concat(ge?" 
/ ".concat(Oe):""),Ze=ue.createElement(ue.Fragment,null,Ze,ue.createElement("span",{className:ie("".concat(b,"-data-count"),O==null?void 0:O.count),style:$==null?void 0:$.count},Ve)));var Ye=function(we){var ze;T==null||T(we),(ze=se())!==null&&ze!==void 0&&ze.style.height&&Q(!0)},Ge=!A&&!y&&!c;return ue.createElement(EM,{ref:te,value:L,allowClear:c,handleReset:rt,suffix:Ze,prefixCls:b,classNames:Z(Z({},O),{},{affixWrapper:ie(O==null?void 0:O.affixWrapper,K(K({},"".concat(b,"-show-count"),y),"".concat(b,"-textarea-allow-clear"),c))}),disabled:E,focused:D,className:ie(C,ye&&"".concat(b,"-out-of-range")),style:Z(Z({},S),Y&&!Ge?{height:"auto"}:{}),dataAttrs:{affixWrapper:{"data-count":typeof Ve=="string"?Ve:void 0}},hidden:k,readOnly:R,onClear:M},ue.createElement(XW,$e({},z,{autoSize:A,maxLength:u,onKeyDown:Le,onChange:Ie,onFocus:Be,onBlur:et,onCompositionStart:Ae,onCompositionEnd:me,className:ie(O==null?void 0:O.textarea),style:Z(Z({},$==null?void 0:$.textarea),{},{resize:S==null?void 0:S.resize}),disabled:E,prefixCls:b,onResize:Ye,ref:ce,readOnly:R})))}),QW=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{var n,r;const{prefixCls:o,bordered:i=!0,size:a,disabled:s,status:c,allowClear:u,classNames:p,rootClassName:v,className:h,style:m,styles:b,variant:y}=e,w=QW(e,["prefixCls","bordered","size","disabled","status","allowClear","classNames","rootClassName","className","style","styles","variant"]),{getPrefixCls:C,direction:S,textArea:E}=d.useContext(ht),k=Go(a),O=d.useContext(So),$=s??O,{status:T,hasFeedback:M,feedbackIcon:P}=d.useContext(Vr),R=$d(T,c),A=d.useRef(null);d.useImperativeHandle(t,()=>{var U;return{resizableTextArea:(U=A.current)===null||U===void 0?void 0:U.resizableTextArea,focus:D=>{var W,G;kW((G=(W=A.current)===null||W===void 0?void 0:W.resizableTextArea)===null||G===void 0?void 
0:G.textArea,D)},blur:()=>{var D;return(D=A.current)===null||D===void 0?void 0:D.blur()}}});const V=C("input",o),z=br(V),[B,_,H]=Fw(V,z),[j,L]=Gv("textArea",y,i),F=OM(u??(E==null?void 0:E.allowClear));return B(d.createElement(YW,Object.assign({autoComplete:E==null?void 0:E.autoComplete},w,{style:Object.assign(Object.assign({},E==null?void 0:E.style),m),styles:Object.assign(Object.assign({},E==null?void 0:E.styles),b),disabled:$,allowClear:F,className:ie(H,z,h,v,E==null?void 0:E.className),classNames:Object.assign(Object.assign(Object.assign({},p),E==null?void 0:E.classNames),{textarea:ie({[`${V}-sm`]:k==="small",[`${V}-lg`]:k==="large"},_,p==null?void 0:p.textarea,(n=E==null?void 0:E.classNames)===null||n===void 0?void 0:n.textarea),variant:ie({[`${V}-${j}`]:L},od(V,R)),affixWrapper:ie(`${V}-textarea-affix-wrapper`,{[`${V}-affix-wrapper-rtl`]:S==="rtl",[`${V}-affix-wrapper-sm`]:k==="small",[`${V}-affix-wrapper-lg`]:k==="large",[`${V}-textarea-show-count`]:e.showCount||((r=e.count)===null||r===void 0?void 0:r.show)},_)}),prefixCls:V,suffix:M&&d.createElement("span",{className:`${V}-textarea-suffix`},P),ref:A})))}),Fo=rh;Fo.Group=SW;Fo.Search=_W;Fo.TextArea=TM;Fo.Password=HW;Fo.OTP=MW;function Tk(e){return["small","middle","large"].includes(e)}function Pk(e){return e?typeof e=="number"&&!Number.isNaN(e):!1}const PM=ue.createContext({latestIndex:0}),ZW=PM.Provider,JW=e=>{let{className:t,index:n,children:r,split:o,style:i}=e;const{latestIndex:a}=d.useContext(PM);return r==null?null:d.createElement(d.Fragment,null,d.createElement("div",{className:t,style:i},r),n{var n,r,o;const{getPrefixCls:i,space:a,direction:s}=d.useContext(ht),{size:c=(n=a==null?void 0:a.size)!==null&&n!==void 
0?n:"small",align:u,className:p,rootClassName:v,children:h,direction:m="horizontal",prefixCls:b,split:y,style:w,wrap:C=!1,classNames:S,styles:E}=e,k=eU(e,["size","align","className","rootClassName","children","direction","prefixCls","split","style","wrap","classNames","styles"]),[O,$]=Array.isArray(c)?c:[c,c],T=Tk($),M=Tk(O),P=Pk($),R=Pk(O),A=lo(h,{keepEmpty:!0}),V=u===void 0&&m==="horizontal"?"center":u,z=i("space",b),[B,_,H]=nT(z),j=ie(z,a==null?void 0:a.className,_,`${z}-${m}`,{[`${z}-rtl`]:s==="rtl",[`${z}-align-${V}`]:V,[`${z}-gap-row-${$}`]:T,[`${z}-gap-col-${O}`]:M},p,v,H),L=ie(`${z}-item`,(r=S==null?void 0:S.item)!==null&&r!==void 0?r:(o=a==null?void 0:a.classNames)===null||o===void 0?void 0:o.item);let F=0;const U=A.map((G,q)=>{var J,Y;G!=null&&(F=q);const Q=(G==null?void 0:G.key)||`${L}-${q}`;return d.createElement(JW,{className:L,key:Q,index:q,split:y,style:(J=E==null?void 0:E.item)!==null&&J!==void 0?J:(Y=a==null?void 0:a.styles)===null||Y===void 0?void 0:Y.item},G)}),D=d.useMemo(()=>({latestIndex:F}),[F]);if(A.length===0)return null;const W={};return C&&(W.flexWrap="wrap"),!M&&R&&(W.columnGap=O),!T&&P&&(W.rowGap=$),B(d.createElement("div",Object.assign({ref:t,className:j,style:Object.assign(Object.assign(Object.assign({},W),a==null?void 0:a.style),w)},k),d.createElement(ZW,{value:D},U)))}),Ww=tU;Ww.Compact=G5;var nU=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var 
o=0,r=Object.getOwnPropertySymbols(e);o{const{getPopupContainer:t,getPrefixCls:n,direction:r}=d.useContext(ht),{prefixCls:o,type:i="default",danger:a,disabled:s,loading:c,onClick:u,htmlType:p,children:v,className:h,menu:m,arrow:b,autoFocus:y,overlay:w,trigger:C,align:S,open:E,onOpenChange:k,placement:O,getPopupContainer:$,href:T,icon:M=d.createElement(FP,null),title:P,buttonsRender:R=te=>te,mouseEnterDelay:A,mouseLeaveDelay:V,overlayClassName:z,overlayStyle:B,destroyPopupOnHide:_,dropdownRender:H}=e,j=nU(e,["prefixCls","type","danger","disabled","loading","onClick","htmlType","children","className","menu","arrow","autoFocus","overlay","trigger","align","open","onOpenChange","placement","getPopupContainer","href","icon","title","buttonsRender","mouseEnterDelay","mouseLeaveDelay","overlayClassName","overlayStyle","destroyPopupOnHide","dropdownRender"]),L=n("dropdown",o),F=`${L}-button`,U={menu:m,arrow:b,autoFocus:y,align:S,disabled:s,trigger:s?[]:C,onOpenChange:k,getPopupContainer:$||t,mouseEnterDelay:A,mouseLeaveDelay:V,overlayClassName:z,overlayStyle:B,destroyPopupOnHide:_,dropdownRender:H},{compactSize:D,compactItemClassnames:W}=lc(L,r),G=ie(F,W,h);"overlay"in e&&(U.overlay=w),"open"in e&&(U.open=E),"placement"in e?U.placement=O:U.placement=r==="rtl"?"bottomLeft":"bottomRight";const q=d.createElement(jr,{type:i,danger:a,disabled:s,loading:c,onClick:u,htmlType:p,href:T,title:P},v),J=d.createElement(jr,{type:i,danger:a,icon:M}),[Y,Q]=R([q,J]);return d.createElement(Ww.Compact,Object.assign({className:G,size:D,block:!0},j),Y,d.createElement(eh,Object.assign({},U),Q))};MM.__ANT_BUTTON=!0;const Uw=eh;Uw.Button=MM;var rU={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M854.6 288.6L639.4 73.4c-6-6-14.1-9.4-22.6-9.4H192c-17.7 0-32 14.3-32 32v832c0 17.7 14.3 32 32 32h640c17.7 0 32-14.3 32-32V311.3c0-8.5-3.4-16.7-9.4-22.7zM790.2 326H602V137.8L790.2 326zm1.8 562H232V136h302v216a42 42 0 0042 42h216v494zM504 618H320c-4.4 0-8 
3.6-8 8v48c0 4.4 3.6 8 8 8h184c4.4 0 8-3.6 8-8v-48c0-4.4-3.6-8-8-8zM312 490v48c0 4.4 3.6 8 8 8h384c4.4 0 8-3.6 8-8v-48c0-4.4-3.6-8-8-8H320c-4.4 0-8 3.6-8 8z"}}]},name:"file-text",theme:"outlined"},oU=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:rU}))},iU=d.forwardRef(oU);function sv(e){const[t,n]=d.useState(e);return d.useEffect(()=>{const r=setTimeout(()=>{n(e)},e.length?0:10);return()=>{clearTimeout(r)}},[e]),t}const aU=e=>{const{componentCls:t}=e,n=`${t}-show-help`,r=`${t}-show-help-item`;return{[n]:{transition:`opacity ${e.motionDurationSlow} ${e.motionEaseInOut}`,"&-appear, &-enter":{opacity:0,"&-active":{opacity:1}},"&-leave":{opacity:1,"&-active":{opacity:0}},[r]:{overflow:"hidden",transition:`height ${e.motionDurationSlow} ${e.motionEaseInOut}, - opacity ${e.motionDurationSlow} ${e.motionEaseInOut}, - transform ${e.motionDurationSlow} ${e.motionEaseInOut} !important`,[`&${r}-appear, &${r}-enter`]:{transform:"translateY(-5px)",opacity:0,"&-active":{transform:"translateY(0)",opacity:1}},[`&${r}-leave-active`]:{transform:"translateY(-5px)"}}}}},sU=e=>({legend:{display:"block",width:"100%",marginBottom:e.marginLG,padding:0,color:e.colorTextDescription,fontSize:e.fontSizeLG,lineHeight:"inherit",border:0,borderBottom:`${de(e.lineWidth)} ${e.lineType} ${e.colorBorder}`},'input[type="search"]':{boxSizing:"border-box"},'input[type="radio"], input[type="checkbox"]':{lineHeight:"normal"},'input[type="file"]':{display:"block"},'input[type="range"]':{display:"block",width:"100%"},"select[multiple], select[size]":{height:"auto"},"input[type='file']:focus,\n input[type='radio']:focus,\n input[type='checkbox']:focus":{outline:0,boxShadow:`0 0 0 ${de(e.controlOutlineWidth)} ${e.controlOutline}`},output:{display:"block",paddingTop:15,color:e.colorText,fontSize:e.fontSize,lineHeight:e.lineHeight}}),Mk=(e,t)=>{const{formItemCls:n}=e;return{[n]:{[`${n}-label > 
label`]:{height:t},[`${n}-control-input`]:{minHeight:t}}}},lU=e=>{const{componentCls:t}=e;return{[e.componentCls]:Object.assign(Object.assign(Object.assign({},jn(e)),sU(e)),{[`${t}-text`]:{display:"inline-block",paddingInlineEnd:e.paddingSM},"&-small":Object.assign({},Mk(e,e.controlHeightSM)),"&-large":Object.assign({},Mk(e,e.controlHeightLG))})}},cU=e=>{const{formItemCls:t,iconCls:n,componentCls:r,rootPrefixCls:o,antCls:i,labelRequiredMarkColor:a,labelColor:s,labelFontSize:c,labelHeight:u,labelColonMarginInlineStart:p,labelColonMarginInlineEnd:v,itemMarginBottom:h}=e;return{[t]:Object.assign(Object.assign({},jn(e)),{marginBottom:h,verticalAlign:"top","&-with-help":{transition:"none"},[`&-hidden, - &-hidden${i}-row`]:{display:"none"},"&-has-warning":{[`${t}-split`]:{color:e.colorError}},"&-has-error":{[`${t}-split`]:{color:e.colorWarning}},[`${t}-label`]:{flexGrow:0,overflow:"hidden",whiteSpace:"nowrap",textAlign:"end",verticalAlign:"middle","&-left":{textAlign:"start"},"&-wrap":{overflow:"unset",lineHeight:e.lineHeight,whiteSpace:"unset"},"> label":{position:"relative",display:"inline-flex",alignItems:"center",maxWidth:"100%",height:u,color:s,fontSize:c,[`> ${n}`]:{fontSize:e.fontSize,verticalAlign:"top"},[`&${t}-required:not(${t}-required-mark-optional)::before`]:{display:"inline-block",marginInlineEnd:e.marginXXS,color:a,fontSize:e.fontSize,fontFamily:"SimSun, sans-serif",lineHeight:1,content:'"*"',[`${r}-hide-required-mark &`]:{display:"none"}},[`${t}-optional`]:{display:"inline-block",marginInlineStart:e.marginXXS,color:e.colorTextDescription,[`${r}-hide-required-mark 
&`]:{display:"none"}},[`${t}-tooltip`]:{color:e.colorTextDescription,cursor:"help",writingMode:"horizontal-tb",marginInlineStart:e.marginXXS},"&::after":{content:'":"',position:"relative",marginBlock:0,marginInlineStart:p,marginInlineEnd:v},[`&${t}-no-colon::after`]:{content:'"\\a0"'}}},[`${t}-control`]:{"--ant-display":"flex",flexDirection:"column",flexGrow:1,[`&:first-child:not([class^="'${o}-col-'"]):not([class*="' ${o}-col-'"])`]:{width:"100%"},"&-input":{position:"relative",display:"flex",alignItems:"center",minHeight:e.controlHeight,"&-content":{flex:"auto",maxWidth:"100%"}}},[t]:{"&-explain, &-extra":{clear:"both",color:e.colorTextDescription,fontSize:e.fontSize,lineHeight:e.lineHeight},"&-explain-connected":{width:"100%"},"&-extra":{minHeight:e.controlHeightSM,transition:`color ${e.motionDurationMid} ${e.motionEaseOut}`},"&-explain":{"&-error":{color:e.colorError},"&-warning":{color:e.colorWarning}}},[`&-with-help ${t}-explain`]:{height:"auto",opacity:1},[`${t}-feedback-icon`]:{fontSize:e.fontSize,textAlign:"center",visibility:"visible",animationName:iw,animationDuration:e.motionDurationMid,animationTimingFunction:e.motionEaseOutBack,pointerEvents:"none","&-success":{color:e.colorSuccess},"&-error":{color:e.colorError},"&-warning":{color:e.colorWarning},"&-validating":{color:e.colorPrimary}}})}},Nk=(e,t)=>{const{formItemCls:n}=e;return{[`${t}-horizontal`]:{[`${n}-label`]:{flexGrow:0},[`${n}-control`]:{flex:"1 1 0",minWidth:0},[`${n}-label[class$='-24'], ${n}-label[class*='-24 ']`]:{[`& + ${n}-control`]:{minWidth:"unset"}}}}},uU=e=>{const{componentCls:t,formItemCls:n,inlineItemMarginBottom:r}=e;return{[`${t}-inline`]:{display:"flex",flexWrap:"wrap",[n]:{flex:"none",marginInlineEnd:e.margin,marginBottom:r,"&-row":{flexWrap:"nowrap"},[`> ${n}-label, - > ${n}-control`]:{display:"inline-block",verticalAlign:"top"},[`> 
${n}-label`]:{flex:"none"},[`${t}-text`]:{display:"inline-block"},[`${n}-has-feedback`]:{display:"inline-block"}}}}},ci=e=>({padding:e.verticalLabelPadding,margin:e.verticalLabelMargin,whiteSpace:"initial",textAlign:"start","> label":{margin:0,"&::after":{visibility:"hidden"}}}),NM=e=>{const{componentCls:t,formItemCls:n,rootPrefixCls:r}=e;return{[`${n} ${n}-label`]:ci(e),[`${t}:not(${t}-inline)`]:{[n]:{flexWrap:"wrap",[`${n}-label, ${n}-control`]:{[`&:not([class*=" ${r}-col-xs"])`]:{flex:"0 0 100%",maxWidth:"100%"}}}}}},dU=e=>{const{componentCls:t,formItemCls:n,antCls:r}=e;return{[`${t}-vertical`]:{[`${n}:not(${n}-horizontal)`]:{[`${n}-row`]:{flexDirection:"column"},[`${n}-label > label`]:{height:"auto"},[`${n}-control`]:{width:"100%"},[`${n}-label, - ${r}-col-24${n}-label, - ${r}-col-xl-24${n}-label`]:ci(e)}},[`@media (max-width: ${de(e.screenXSMax)})`]:[NM(e),{[t]:{[`${n}:not(${n}-horizontal)`]:{[`${r}-col-xs-24${n}-label`]:ci(e)}}}],[`@media (max-width: ${de(e.screenSMMax)})`]:{[t]:{[`${n}:not(${n}-horizontal)`]:{[`${r}-col-sm-24${n}-label`]:ci(e)}}},[`@media (max-width: ${de(e.screenMDMax)})`]:{[t]:{[`${n}:not(${n}-horizontal)`]:{[`${r}-col-md-24${n}-label`]:ci(e)}}},[`@media (max-width: ${de(e.screenLGMax)})`]:{[t]:{[`${n}:not(${n}-horizontal)`]:{[`${r}-col-lg-24${n}-label`]:ci(e)}}}}},fU=e=>{const{formItemCls:t,antCls:n}=e;return{[`${t}-vertical`]:{[`${t}-row`]:{flexDirection:"column"},[`${t}-label > label`]:{height:"auto"},[`${t}-control`]:{width:"100%"}},[`${t}-vertical ${t}-label, - ${n}-col-24${t}-label, - ${n}-col-xl-24${t}-label`]:ci(e),[`@media (max-width: ${de(e.screenXSMax)})`]:[NM(e),{[t]:{[`${n}-col-xs-24${t}-label`]:ci(e)}}],[`@media (max-width: ${de(e.screenSMMax)})`]:{[t]:{[`${n}-col-sm-24${t}-label`]:ci(e)}},[`@media (max-width: ${de(e.screenMDMax)})`]:{[t]:{[`${n}-col-md-24${t}-label`]:ci(e)}},[`@media (max-width: 
${de(e.screenLGMax)})`]:{[t]:{[`${n}-col-lg-24${t}-label`]:ci(e)}}}},pU=e=>({labelRequiredMarkColor:e.colorError,labelColor:e.colorTextHeading,labelFontSize:e.fontSize,labelHeight:e.controlHeight,labelColonMarginInlineStart:e.marginXXS/2,labelColonMarginInlineEnd:e.marginXS,itemMarginBottom:e.marginLG,verticalLabelPadding:`0 0 ${e.paddingXS}px`,verticalLabelMargin:0,inlineItemMarginBottom:0}),RM=(e,t)=>vn(e,{formItemCls:`${e.componentCls}-item`,rootPrefixCls:t}),Kw=In("Form",(e,t)=>{let{rootPrefixCls:n}=t;const r=RM(e,n);return[lU(r),cU(r),aU(r),Nk(r,r.componentCls),Nk(r,r.formItemCls),uU(r),dU(r),fU(r),zv(r),iw]},pU,{order:-1e3}),Rk=[];function zm(e,t,n){let r=arguments.length>3&&arguments[3]!==void 0?arguments[3]:0;return{key:typeof e=="string"?e:`${t}-${r}`,error:e,errorStatus:n}}const DM=e=>{let{help:t,helpStatus:n,errors:r=Rk,warnings:o=Rk,className:i,fieldId:a,onVisibleChanged:s}=e;const{prefixCls:c}=d.useContext(mw),u=`${c}-item-explain`,p=br(c),[v,h,m]=Kw(c,p),b=d.useMemo(()=>Ju(c),[c]),y=sv(r),w=sv(o),C=d.useMemo(()=>t!=null?[zm(t,"help",n)]:[].concat(Se(y.map((E,k)=>zm(E,"error","error",k))),Se(w.map((E,k)=>zm(E,"warning","warning",k)))),[t,n,y,w]),S={};return a&&(S.id=`${a}_help`),v(d.createElement(Xo,{motionDeadline:b.motionDeadline,motionName:`${c}-show-help`,visible:!!C.length,onVisibleChanged:s},E=>{const{className:k,style:O}=E;return d.createElement("div",Object.assign({},S,{className:ie(u,k,m,p,i,h),style:O,role:"alert"}),d.createElement(VI,Object.assign({keys:C},Ju(c),{motionName:`${c}-show-help-item`,component:!1}),$=>{const{key:T,error:M,errorStatus:P,className:R,style:A}=$;return d.createElement("div",{key:T,className:ie(R,{[`${u}-${P}`]:P}),style:A},M)}))}))},vU=["parentNode"],hU="form_item";function Ou(e){return e===void 0||e===!1?[]:Array.isArray(e)?e:[e]}function jM(e,t){if(!e.length)return;const n=e.join("_");return t?`${t}_${n}`:vU.includes(n)?`${hU}_${n}`:n}function LM(e,t,n,r,o,i){let a=r;return i!==void 
0?a=i:n.validating?a="validating":e.length?a="error":t.length?a="warning":(n.touched||o&&n.validated)&&(a="success"),a}function Dk(e){return Ou(e).join("_")}function gU(e,t){const n=t.getFieldInstance(e),r=jy(n);if(r)return r;const o=jM(Ou(e),t.__INTERNAL__.name);if(o)return document.getElementById(o)}function BM(e){const[t]=gw(),n=d.useRef({}),r=d.useMemo(()=>e??Object.assign(Object.assign({},t),{__INTERNAL__:{itemRef:o=>i=>{const a=Dk(o);i?n.current[a]=i:delete n.current[a]}},scrollToField:function(o){let i=arguments.length>1&&arguments[1]!==void 0?arguments[1]:{};const a=gU(o,r);a&&b5(a,Object.assign({scrollMode:"if-needed",block:"nearest"},i))},getFieldInstance:o=>{const i=Dk(o);return n.current[i]}}),[e,t]);return[r]}var mU=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{const n=d.useContext(So),{getPrefixCls:r,direction:o,form:i}=d.useContext(ht),{prefixCls:a,className:s,rootClassName:c,size:u,disabled:p=n,form:v,colon:h,labelAlign:m,labelWrap:b,labelCol:y,wrapperCol:w,hideRequiredMark:C,layout:S="horizontal",scrollToFirstError:E,requiredMark:k,onFinishFailed:O,name:$,style:T,feedbackIcons:M,variant:P}=e,R=mU(e,["prefixCls","className","rootClassName","size","disabled","form","colon","labelAlign","labelWrap","labelCol","wrapperCol","hideRequiredMark","layout","scrollToFirstError","requiredMark","onFinishFailed","name","style","feedbackIcons","variant"]),A=Go(u),V=d.useContext(vI),z=d.useMemo(()=>k!==void 0?k:C?!1:i&&i.requiredMark!==void 0?i.requiredMark:!0,[C,k,i]),B=h??(i==null?void 0:i.colon),_=r("form",a),H=br(_),[j,L,F]=Kw(_,H),U=ie(_,`${_}-${S}`,{[`${_}-hide-required-mark`]:z===!1,[`${_}-rtl`]:o==="rtl",[`${_}-${A}`]:A},F,H,L,i==null?void 0:i.className,s,c),[D]=BM(v),{__INTERNAL__:W}=D;W.name=$;const 
G=d.useMemo(()=>({name:$,labelAlign:m,labelCol:y,labelWrap:b,wrapperCol:w,vertical:S==="vertical",colon:B,requiredMark:z,itemRef:W.itemRef,form:D,feedbackIcons:M}),[$,m,y,w,S,B,z,D,M]),q=d.useRef(null);d.useImperativeHandle(t,()=>{var Q;return Object.assign(Object.assign({},D),{nativeElement:(Q=q.current)===null||Q===void 0?void 0:Q.nativeElement})});const J=(Q,te)=>{if(Q){let ce={block:"nearest"};typeof Q=="object"&&(ce=Q),D.scrollToField(te,ce)}},Y=Q=>{if(O==null||O(Q),Q.errorFields.length){const te=Q.errorFields[0].name;if(E!==void 0){J(E,te);return}i&&i.scrollToFirstError!==void 0&&J(i.scrollToFirstError,te)}};return j(d.createElement($T.Provider,{value:P},d.createElement(Gy,{disabled:p},d.createElement(Is.Provider,{value:A},d.createElement(OT,{validateMessages:V},d.createElement(oa.Provider,{value:G},d.createElement(cc,Object.assign({id:$},R,{name:$,onFinishFailed:Y,form:D,ref:q,style:Object.assign(Object.assign({},i==null?void 0:i.style),T),className:U}))))))))},yU=d.forwardRef(bU);function wU(e){if(typeof e=="function")return e;const t=lo(e);return t.length<=1?t[0]:t}const AM=()=>{const{status:e,errors:t=[],warnings:n=[]}=d.useContext(Vr);return{status:e,errors:t,warnings:n}};AM.Context=Vr;function xU(e){const[t,n]=d.useState(e),r=d.useRef(null),o=d.useRef([]),i=d.useRef(!1);d.useEffect(()=>(i.current=!1,()=>{i.current=!0,bn.cancel(r.current),r.current=null}),[]);function a(s){i.current||(r.current===null&&(o.current=[],r.current=bn(()=>{r.current=null,n(c=>{let u=c;return o.current.forEach(p=>{u=p(u)}),u})})),o.current.push(s))}return[t,a]}function SU(){const{itemRef:e}=d.useContext(oa),t=d.useRef({});function n(r,o){const i=o&&typeof o=="object"&&o.ref,a=r.join("_");return(t.current.name!==a||t.current.originRef!==i)&&(t.current.name=a,t.current.originRef=i,t.current.ref=Wr(e(r),i)),t.current.ref}return n}const CU=e=>{const{formItemCls:t}=e;return{"@media screen and (-ms-high-contrast: active), (-ms-high-contrast: 
none)":{[`${t}-control`]:{display:"flex"}}}},EU=ic(["Form","item-item"],(e,t)=>{let{rootPrefixCls:n}=t;const r=RM(e,n);return[CU(r)]}),kU=e=>{const{prefixCls:t,status:n,wrapperCol:r,children:o,errors:i,warnings:a,_internalItemRender:s,extra:c,help:u,fieldId:p,marginBottom:v,onErrorVisibleChanged:h}=e,m=`${t}-item`,b=d.useContext(oa),y=r||b.wrapperCol||{},w=ie(`${m}-control`,y.className),C=d.useMemo(()=>Object.assign({},b),[b]);delete C.labelCol,delete C.wrapperCol;const S=d.createElement("div",{className:`${m}-control-input`},d.createElement("div",{className:`${m}-control-input-content`},o)),E=d.useMemo(()=>({prefixCls:t,status:n}),[t,n]),k=v!==null||i.length||a.length?d.createElement("div",{style:{display:"flex",flexWrap:"nowrap"}},d.createElement(mw.Provider,{value:E},d.createElement(DM,{fieldId:p,errors:i,warnings:a,help:u,helpStatus:n,className:`${m}-explain-connected`,onVisibleChanged:h})),!!v&&d.createElement("div",{style:{width:0,height:v}})):null,O={};p&&(O.id=`${p}_extra`);const $=c?d.createElement("div",Object.assign({},O,{className:`${m}-extra`}),c):null,T=s&&s.mark==="pro_table_render"&&s.render?s.render(e,{input:S,errorList:k,extra:$}):d.createElement(d.Fragment,null,S,k,$);return d.createElement(oa.Provider,{value:C},d.createElement(iv,Object.assign({},y,{className:w}),T),d.createElement(EU,{prefixCls:t}))};var OU={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M512 64C264.6 64 64 264.6 64 512s200.6 448 448 448 448-200.6 448-448S759.4 64 512 64zm0 820c-205.4 0-372-166.6-372-372s166.6-372 372-372 372 166.6 372 372-166.6 372-372 372z"}},{tag:"path",attrs:{d:"M623.6 316.7C593.6 290.4 554 276 512 276s-81.6 14.5-111.6 40.7C369.2 344 352 380.7 352 420v7.6c0 4.4 3.6 8 8 8h48c4.4 0 8-3.6 8-8V420c0-44.1 43.1-80 96-80s96 35.9 96 80c0 31.1-22 59.6-56.1 72.7-21.2 8.1-39.2 22.3-52.1 40.9-13.1 19-19.9 41.8-19.9 64.9V620c0 4.4 3.6 8 8 8h48c4.4 0 8-3.6 8-8v-22.7a48.3 48.3 0 0130.9-44.8c59-22.7 97.1-74.7 
97.1-132.5.1-39.3-17.1-76-48.3-103.3zM472 732a40 40 0 1080 0 40 40 0 10-80 0z"}}]},name:"question-circle",theme:"outlined"},$U=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:OU}))},IU=d.forwardRef($U),TU=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{let{prefixCls:t,label:n,htmlFor:r,labelCol:o,labelAlign:i,colon:a,required:s,requiredMark:c,tooltip:u,vertical:p}=e;var v;const[h]=bi("Form"),{labelAlign:m,labelCol:b,labelWrap:y,colon:w}=d.useContext(oa);if(!n)return null;const C=o||b||{},S=i||m,E=`${t}-item-label`,k=ie(E,S==="left"&&`${E}-left`,C.className,{[`${E}-wrap`]:!!y});let O=n;const $=a===!0||w!==!1&&a!==!1;$&&!p&&typeof n=="string"&&n.trim()&&(O=n.replace(/[:|:]\s*$/,""));const M=PU(u);if(M){const{icon:V=d.createElement(IU,null)}=M,z=TU(M,["icon"]),B=d.createElement(gi,Object.assign({},z),d.cloneElement(V,{className:`${t}-item-tooltip`,title:"",onClick:_=>{_.preventDefault()},tabIndex:null}));O=d.createElement(d.Fragment,null,O,B)}const P=c==="optional",R=typeof c=="function";R?O=c(O,{required:!!s}):P&&!s&&(O=d.createElement(d.Fragment,null,O,d.createElement("span",{className:`${t}-item-optional`,title:""},(h==null?void 0:h.optional)||((v=hi.Form)===null||v===void 0?void 0:v.optional))));const A=ie({[`${t}-item-required`]:s,[`${t}-item-required-mark-optional`]:P||R,[`${t}-item-no-colon`]:!$});return d.createElement(iv,Object.assign({},C,{className:k}),d.createElement("label",{htmlFor:r,className:A,title:typeof n=="string"?n:""},O))},NU={success:Jy,warning:Nv,error:bd,validating:Xa};function zM(e){let{children:t,errors:n,warnings:r,hasFeedback:o,validateStatus:i,prefixCls:a,meta:s,noStyle:c}=e;const u=`${a}-item`,{feedbackIcons:p}=d.useContext(oa),v=LM(n,r,s,null,!!o,i),{isFormItemInput:h,status:m,hasFeedback:b,feedbackIcon:y}=d.useContext(Vr),w=d.useMemo(()=>{var C;let 
S;if(o){const k=o!==!0&&o.icons||p,O=v&&((C=k==null?void 0:k({status:v,errors:n,warnings:r}))===null||C===void 0?void 0:C[v]),$=v&&NU[v];S=O!==!1&&$?d.createElement("span",{className:ie(`${u}-feedback-icon`,`${u}-feedback-icon-${v}`)},O||d.createElement($,null)):null}const E={status:v||"",errors:n,warnings:r,hasFeedback:!!o,feedbackIcon:S,isFormItemInput:!0};return c&&(E.status=(v??m)||"",E.isFormItemInput=h,E.hasFeedback=!!(o??b),E.feedbackIcon=o!==void 0?E.feedbackIcon:y),E},[v,o,c,h,m]);return d.createElement(Vr.Provider,{value:w},t)}var RU=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{if(A&&T.current){const F=getComputedStyle(T.current);B(parseInt(F.marginBottom,10))}},[A,V]);const _=F=>{F||B(null)},j=function(){let F=arguments.length>0&&arguments[0]!==void 0?arguments[0]:!1;const U=F?M:u.errors,D=F?P:u.warnings;return LM(U,D,u,"",!!p,c)}(),L=ie(E,n,r,{[`${E}-with-help`]:R||M.length||P.length,[`${E}-has-feedback`]:j&&p,[`${E}-has-success`]:j==="success",[`${E}-has-warning`]:j==="warning",[`${E}-has-error`]:j==="error",[`${E}-is-validating`]:j==="validating",[`${E}-hidden`]:v,[`${E}-${C}`]:C});return 
d.createElement("div",{className:L,style:o,ref:T},d.createElement(CM,Object.assign({className:`${E}-row`},Ln(S,["_internalItemRender","colon","dependencies","extra","fieldKey","getValueFromEvent","getValueProps","htmlFor","id","initialValue","isListField","label","labelAlign","labelCol","labelWrap","messageVariables","name","normalize","noStyle","preserve","requiredMark","rules","shouldUpdate","trigger","tooltip","validateFirst","validateTrigger","valuePropName","wrapperCol","validateDebounce"])),d.createElement(MU,Object.assign({htmlFor:m},e,{requiredMark:k,required:b??y,prefixCls:t,vertical:$})),d.createElement(kU,Object.assign({},e,u,{errors:M,warnings:P,prefixCls:t,status:j,help:i,marginBottom:z,onErrorVisibleChanged:_}),d.createElement(kT.Provider,{value:w},d.createElement(zM,{prefixCls:t,meta:u,errors:u.errors,warnings:u.warnings,hasFeedback:p,validateStatus:j},h)))),!!z&&d.createElement("div",{className:`${E}-margin-offset`,style:{marginBottom:-z}}))}const jU="__SPLIT__";function LU(e,t){const n=Object.keys(e),r=Object.keys(t);return n.length===r.length&&n.every(o=>{const i=e[o],a=t[o];return i===a||typeof i=="function"||typeof a=="function"})}const BU=d.memo(e=>{let{children:t}=e;return t},(e,t)=>LU(e.control,t.control)&&e.update===t.update&&e.childProps.length===t.childProps.length&&e.childProps.every((n,r)=>n===t.childProps[r]));function jk(){return{errors:[],warnings:[],touched:!1,validating:!1,name:[],validated:!1}}function AU(e){const{name:t,noStyle:n,className:r,dependencies:o,prefixCls:i,shouldUpdate:a,rules:s,children:c,required:u,label:p,messageVariables:v,trigger:h="onChange",validateTrigger:m,hidden:b,help:y,layout:w}=e,{getPrefixCls:C}=d.useContext(ht),{name:S}=d.useContext(oa),E=wU(c),k=typeof E=="function",O=d.useContext(kT),{validateTrigger:$}=d.useContext(Ms),T=m!==void 0?m:$,M=t!=null,P=C("form",i),R=br(P),[A,V,z]=Kw(P,R);As();const B=d.useContext(td),_=d.useRef(),[H,j]=xU({}),[L,F]=Ts(()=>jk()),U=Q=>{const te=B==null?void 
0:B.getKey(Q.name);if(F(Q.destroy?jk():Q,!0),n&&y!==!1&&O){let ce=Q.name;if(Q.destroy)ce=_.current||ce;else if(te!==void 0){const[se,ne]=te;ce=[se].concat(Se(ne)),_.current=ce}O(Q,ce)}},D=(Q,te)=>{j(ce=>{const se=Object.assign({},ce),ae=[].concat(Se(Q.name.slice(0,-1)),Se(te)).join(jU);return Q.destroy?delete se[ae]:se[ae]=Q,se})},[W,G]=d.useMemo(()=>{const Q=Se(L.errors),te=Se(L.warnings);return Object.values(H).forEach(ce=>{Q.push.apply(Q,Se(ce.errors||[])),te.push.apply(te,Se(ce.warnings||[]))}),[Q,te]},[H,L.errors,L.warnings]),q=SU();function J(Q,te,ce){return n&&!b?d.createElement(zM,{prefixCls:P,hasFeedback:e.hasFeedback,validateStatus:e.validateStatus,meta:L,errors:W,warnings:G,noStyle:!0},Q):d.createElement(DU,Object.assign({key:"row"},e,{className:ie(r,z,R,V),prefixCls:P,fieldId:te,isRequired:ce,errors:W,warnings:G,meta:L,onSubItemMetaChange:D,layout:w}),Q)}if(!M&&!k&&!o)return A(J(E));let Y={};return typeof p=="string"?Y.label=p:t&&(Y.label=String(t)),v&&(Y=Object.assign(Object.assign({},Y),v)),A(d.createElement(hw,Object.assign({},e,{messageVariables:Y,trigger:h,validateTrigger:T,onMetaChange:U}),(Q,te,ce)=>{const se=Ou(t).length&&te?te.name:[],ne=jM(se,S),ae=u!==void 0?u:!!(s!=null&&s.some(le=>{if(le&&typeof le=="object"&&le.required&&!le.warningOnly)return!0;if(typeof le=="function"){const pe=le(ce);return(pe==null?void 0:pe.required)&&!(pe!=null&&pe.warningOnly)}return!1})),ee=Object.assign({},Q);let re=null;if(Array.isArray(E)&&M)re=E;else if(!(k&&(!(a||o)||M))){if(!(o&&!k&&!M))if(d.isValidElement(E)){const le=Object.assign(Object.assign({},E.props),ee);if(le.id||(le.id=ne),y||W.length>0||G.length>0||e.extra){const ge=[];(y||W.length>0)&&ge.push(`${ne}_help`),e.extra&&ge.push(`${ne}_extra`),le["aria-describedby"]=ge.join(" ")}W.length>0&&(le["aria-invalid"]="true"),ae&&(le["aria-required"]="true"),vi(E)&&(le.ref=q(se,E)),new Set([].concat(Se(Ou(h)),Se(Ou(T)))).forEach(ge=>{le[ge]=function(){for(var Re,ye,Te,Ae,me,Ie=arguments.length,Le=new 
Array(Ie),Be=0;Be{var{prefixCls:t,children:n}=e,r=zU(e,["prefixCls","children"]);const{getPrefixCls:o}=d.useContext(ht),i=o("form",t),a=d.useMemo(()=>({prefixCls:i,status:"error"}),[i]);return d.createElement(xT,Object.assign({},r),(s,c,u)=>d.createElement(mw.Provider,{value:a},n(s.map(p=>Object.assign(Object.assign({},p),{fieldKey:p.key})),c,{errors:u.errors,warnings:u.warnings})))};function FU(){const{form:e}=d.useContext(oa);return e}const Or=yU;Or.Item=HM;Or.List=HU;Or.ErrorList=DM;Or.useForm=BM;Or.useFormInstance=FU;Or.useWatch=ET;Or.Provider=OT;Or.create=()=>{};function Lk(e){var t=e.getBoundingClientRect(),n=document.documentElement;return{left:t.left+(window.pageXOffset||n.scrollLeft)-(n.clientLeft||document.body.clientLeft||0),top:t.top+(window.pageYOffset||n.scrollTop)-(n.clientTop||document.body.clientTop||0)}}function ep(e,t,n,r){var o=Hu.unstable_batchedUpdates?function(a){Hu.unstable_batchedUpdates(n,a)}:n;return e!=null&&e.addEventListener&&e.addEventListener(t,o,r),{remove:function(){e!=null&&e.removeEventListener&&e.removeEventListener(t,o,r)}}}const _U=function(){const e=Object.assign({},arguments.length<=0?void 0:arguments[0]);for(let t=1;t{const o=n[r];o!==void 0&&(e[r]=o)})}return e};var VU={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M272.9 512l265.4-339.1c4.1-5.2.4-12.9-6.3-12.9h-77.3c-4.9 0-9.6 2.3-12.6 6.1L186.8 492.3a31.99 31.99 0 000 39.5l255.3 326.1c3 3.9 7.7 6.1 12.6 6.1H532c6.7 0 10.4-7.7 6.3-12.9L272.9 512zm304 0l265.4-339.1c4.1-5.2.4-12.9-6.3-12.9h-77.3c-4.9 0-9.6 2.3-12.6 6.1L490.8 492.3a31.99 31.99 0 000 39.5l255.3 326.1c3 3.9 7.7 6.1 12.6 6.1H836c6.7 0 10.4-7.7 6.3-12.9L576.9 512z"}}]},name:"double-left",theme:"outlined"},WU=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:VU}))},Bk=d.forwardRef(WU),UU={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M533.2 492.3L277.9 166.1c-3-3.9-7.7-6.1-12.6-6.1H188c-6.7 0-10.4 
7.7-6.3 12.9L447.1 512 181.7 851.1A7.98 7.98 0 00188 864h77.3c4.9 0 9.6-2.3 12.6-6.1l255.3-326.1c9.1-11.7 9.1-27.9 0-39.5zm304 0L581.9 166.1c-3-3.9-7.7-6.1-12.6-6.1H492c-6.7 0-10.4 7.7-6.3 12.9L751.1 512 485.7 851.1A7.98 7.98 0 00492 864h77.3c4.9 0 9.6-2.3 12.6-6.1l255.3-326.1c9.1-11.7 9.1-27.9 0-39.5z"}}]},name:"double-right",theme:"outlined"},KU=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:UU}))},Ak=d.forwardRef(KU),qU={items_per_page:"条/页",jump_to:"跳至",jump_to_confirm:"确定",page:"页",prev_page:"上一页",next_page:"下一页",prev_5:"向前 5 页",next_5:"向后 5 页",prev_3:"向前 3 页",next_3:"向后 3 页",page_size:"页码"},XU=["10","20","50","100"],GU=function(t){var n=t.pageSizeOptions,r=n===void 0?XU:n,o=t.locale,i=t.changeSize,a=t.pageSize,s=t.goButton,c=t.quickGo,u=t.rootPrefixCls,p=t.selectComponentClass,v=t.selectPrefixCls,h=t.disabled,m=t.buildOptionText,b=t.showSizeChanger,y=ue.useState(""),w=ve(y,2),C=w[0],S=w[1],E=function(){return!C||Number.isNaN(C)?void 0:Number(C)},k=typeof m=="function"?m:function(L){return"".concat(L," ").concat(o.items_per_page)},O=function(F,U){if(i==null||i(Number(F)),st(b)==="object"){var D;(D=b.onChange)===null||D===void 0||D.call(b,F,U)}},$=function(F){S(F.target.value)},T=function(F){s||C===""||(S(""),!(F.relatedTarget&&(F.relatedTarget.className.indexOf("".concat(u,"-item-link"))>=0||F.relatedTarget.className.indexOf("".concat(u,"-item"))>=0))&&(c==null||c(E())))},M=function(F){C!==""&&(F.keyCode===De.ENTER||F.type==="click")&&(S(""),c==null||c(E()))},P=function(){return r.some(function(F){return F.toString()===a.toString()})?r:r.concat([a.toString()]).sort(function(F,U){var D=Number.isNaN(Number(F))?0:Number(F),W=Number.isNaN(Number(U))?0:Number(U);return D-W})},R="".concat(u,"-options");if(!b&&!c)return null;var A=null,V=null,z=null;if(b&&p){var B=st(b)==="object"?b:{},_=B.options,H=B.className,j=_?void 0:P().map(function(L,F){return 
ue.createElement(p.Option,{key:F,value:L.toString()},k(L))});A=ue.createElement(p,$e({disabled:h,prefixCls:v,showSearch:!1,optionLabelProp:_?"label":"children",popupMatchSelectWidth:!1,value:(a||r[0]).toString(),getPopupContainer:function(F){return F.parentNode},"aria-label":o.page_size,defaultOpen:!1},st(b)==="object"?b:null,{className:ie("".concat(R,"-size-changer"),H),options:_,onChange:O}),j)}return c&&(s&&(z=typeof s=="boolean"?ue.createElement("button",{type:"button",onClick:M,onKeyUp:M,disabled:h,className:"".concat(R,"-quick-jumper-button")},o.jump_to_confirm):ue.createElement("span",{onClick:M,onKeyUp:M},s)),V=ue.createElement("div",{className:"".concat(R,"-quick-jumper")},o.jump_to,ue.createElement("input",{disabled:h,type:"text",value:C,onChange:$,onKeyUp:M,onBlur:T,"aria-label":o.page}),o.page,z)),ue.createElement("li",{className:R},A,V)},ru=function(t){var n=t.rootPrefixCls,r=t.page,o=t.active,i=t.className,a=t.showTitle,s=t.onClick,c=t.onKeyPress,u=t.itemRender,p="".concat(n,"-item"),v=ie(p,"".concat(p,"-").concat(r),K(K({},"".concat(p,"-active"),o),"".concat(p,"-disabled"),!r),i),h=function(){s(r)},m=function(w){c(w,s,r)},b=u(r,"page",ue.createElement("a",{rel:"nofollow"},r));return b?ue.createElement("li",{title:a?String(r):null,className:v,onClick:h,onKeyDown:m,tabIndex:0},b):null},YU=function(t,n,r){return r};function zk(){}function Hk(e){var t=Number(e);return typeof t=="number"&&!Number.isNaN(t)&&isFinite(t)&&Math.floor(t)===t}function ps(e,t,n){var r=typeof e>"u"?t:e;return Math.floor((n-1)/r)+1}var QU=function(t){var n=t.prefixCls,r=n===void 0?"rc-pagination":n,o=t.selectPrefixCls,i=o===void 0?"rc-select":o,a=t.className,s=t.selectComponentClass,c=t.current,u=t.defaultCurrent,p=u===void 0?1:u,v=t.total,h=v===void 0?0:v,m=t.pageSize,b=t.defaultPageSize,y=b===void 0?10:b,w=t.onChange,C=w===void 0?zk:w,S=t.hideOnSinglePage,E=t.align,k=t.showPrevNextJumpers,O=k===void 0?!0:k,$=t.showQuickJumper,T=t.showLessItems,M=t.showTitle,P=M===void 
0?!0:M,R=t.onShowSizeChange,A=R===void 0?zk:R,V=t.locale,z=V===void 0?qU:V,B=t.style,_=t.totalBoundaryShowSizeChanger,H=_===void 0?50:_,j=t.disabled,L=t.simple,F=t.showTotal,U=t.showSizeChanger,D=U===void 0?h>H:U,W=t.pageSizeOptions,G=t.itemRender,q=G===void 0?YU:G,J=t.jumpPrevIcon,Y=t.jumpNextIcon,Q=t.prevIcon,te=t.nextIcon,ce=ue.useRef(null),se=Dn(10,{value:m,defaultValue:y}),ne=ve(se,2),ae=ne[0],ee=ne[1],re=Dn(1,{value:c,defaultValue:p,postState:function(Bt){return Math.max(1,Math.min(Bt,ps(void 0,ae,h)))}}),le=ve(re,2),pe=le[0],Oe=le[1],ge=ue.useState(pe),Re=ve(ge,2),ye=Re[0],Te=Re[1];d.useEffect(function(){Te(pe)},[pe]);var Ae=Math.max(1,pe-(T?3:5)),me=Math.min(ps(void 0,ae,h),pe+(T?3:5));function Ie(mt,Bt){var Zt=mt||ue.createElement("button",{type:"button","aria-label":Bt,className:"".concat(r,"-item-link")});return typeof mt=="function"&&(Zt=ue.createElement(mt,Z({},t))),Zt}function Le(mt){var Bt=mt.target.value,Zt=ps(void 0,ae,h),hn;return Bt===""?hn=Bt:Number.isNaN(Number(Bt))?hn=ye:Bt>=Zt?hn=Zt:hn=Number(Bt),hn}function Be(mt){return Hk(mt)&&mt!==pe&&Hk(h)&&h>0}var et=h>ae?$:!1;function rt(mt){(mt.keyCode===De.UP||mt.keyCode===De.DOWN)&&mt.preventDefault()}function Ze(mt){var Bt=Le(mt);switch(Bt!==ye&&Te(Bt),mt.keyCode){case De.ENTER:Ge(Bt);break;case De.UP:Ge(Bt-1);break;case De.DOWN:Ge(Bt+1);break}}function Ve(mt){Ge(Le(mt))}function Ye(mt){var Bt=ps(mt,ae,h),Zt=pe>Bt&&Bt!==0?Bt:pe;ee(mt),Te(Zt),A==null||A(pe,mt),Oe(Zt),C==null||C(Zt,mt)}function Ge(mt){if(Be(mt)&&!j){var Bt=ps(void 0,ae,h),Zt=mt;return mt>Bt?Zt=Bt:mt<1&&(Zt=1),Zt!==ye&&Te(Zt),Oe(Zt),C==null||C(Zt,ae),Zt}return pe}var Fe=pe>1,we=pe2?Zt-2:0),dr=2;drh?h:pe*ae])),Et=null,wt=ps(void 0,ae,h);if(S&&h<=ae)return null;var _e=[],qe={rootPrefixCls:r,onClick:Ge,onKeyPress:St,showTitle:P,itemRender:q,page:-1},ot=pe-1>0?pe-1:0,at=pe+1=$t*2&&pe!==3&&(_e[0]=ue.cloneElement(_e[0],{className:ie("".concat(r,"-item-after-jump-prev"),_e[0].props.className)}),_e.unshift(Je)),wt-pe>=$t*2&&pe!==wt-2){var 
Qt=_e[_e.length-1];_e[_e.length-1]=ue.cloneElement(Qt,{className:ie("".concat(r,"-item-before-jump-next"),Qt.props.className)}),_e.push(Et)}ut!==1&&_e.unshift(ue.createElement(ru,$e({},qe,{key:1,page:1}))),lt!==wt&&_e.push(ue.createElement(ru,$e({},qe,{key:wt,page:wt})))}var dn=Pt(ot);if(dn){var tn=!Fe||!wt;dn=ue.createElement("li",{title:P?z.prev_page:null,onClick:ze,tabIndex:tn?null:0,onKeyDown:Ft,className:ie("".concat(r,"-prev"),K({},"".concat(r,"-disabled"),tn)),"aria-disabled":tn},dn)}var Sn=Gt(at);if(Sn){var Xn,or;L?(Xn=!we,or=Fe?0:null):(Xn=!we||!wt,or=Xn?null:0),Sn=ue.createElement("li",{title:P?z.next_page:null,onClick:Me,tabIndex:or,onKeyDown:Lt,className:ie("".concat(r,"-next"),K({},"".concat(r,"-disabled"),Xn)),"aria-disabled":Xn},Sn)}var tr=ie(r,a,K(K(K(K(K({},"".concat(r,"-start"),E==="start"),"".concat(r,"-center"),E==="center"),"".concat(r,"-end"),E==="end"),"".concat(r,"-simple"),L),"".concat(r,"-disabled"),j));return ue.createElement("ul",$e({className:tr,style:B,ref:ce},He),We,dn,L?dt:_e,Sn,ue.createElement(GU,{locale:z,rootPrefixCls:r,disabled:j,selectComponentClass:s,selectPrefixCls:i,changeSize:Ye,pageSize:ae,pageSizeOptions:W,quickGo:et?Ge:null,goButton:pt,showSizeChanger:D}))};const FM=e=>d.createElement(yi,Object.assign({},e,{showSearch:!0,size:"small"})),_M=e=>d.createElement(yi,Object.assign({},e,{showSearch:!0,size:"middle"}));FM.Option=yi.Option;_M.Option=yi.Option;const ZU=e=>{const{componentCls:t}=e;return{[`${t}-disabled`]:{"&, &:hover":{cursor:"not-allowed",[`${t}-item-link`]:{color:e.colorTextDisabled,cursor:"not-allowed"}},"&:focus-visible":{cursor:"not-allowed",[`${t}-item-link`]:{color:e.colorTextDisabled,cursor:"not-allowed"}}},[`&${t}-disabled`]:{cursor:"not-allowed",[`${t}-item`]:{cursor:"not-allowed","&:hover, 
&:active":{backgroundColor:"transparent"},a:{color:e.colorTextDisabled,backgroundColor:"transparent",border:"none",cursor:"not-allowed"},"&-active":{borderColor:e.colorBorder,backgroundColor:e.itemActiveBgDisabled,"&:hover, &:active":{backgroundColor:e.itemActiveBgDisabled},a:{color:e.itemActiveColorDisabled}}},[`${t}-item-link`]:{color:e.colorTextDisabled,cursor:"not-allowed","&:hover, &:active":{backgroundColor:"transparent"},[`${t}-simple&`]:{backgroundColor:"transparent","&:hover, &:active":{backgroundColor:"transparent"}}},[`${t}-simple-pager`]:{color:e.colorTextDisabled},[`${t}-jump-prev, ${t}-jump-next`]:{[`${t}-item-link-icon`]:{opacity:0},[`${t}-item-ellipsis`]:{opacity:1}}},[`&${t}-simple`]:{[`${t}-prev, ${t}-next`]:{[`&${t}-disabled ${t}-item-link`]:{"&:hover, &:active":{backgroundColor:"transparent"}}}}}},JU=e=>{const{componentCls:t}=e;return{[`&${t}-mini ${t}-total-text, &${t}-mini ${t}-simple-pager`]:{height:e.itemSizeSM,lineHeight:de(e.itemSizeSM)},[`&${t}-mini ${t}-item`]:{minWidth:e.itemSizeSM,height:e.itemSizeSM,margin:0,lineHeight:de(e.calc(e.itemSizeSM).sub(2).equal())},[`&${t}-mini:not(${t}-disabled) ${t}-item:not(${t}-item-active)`]:{backgroundColor:"transparent",borderColor:"transparent","&:hover":{backgroundColor:e.colorBgTextHover},"&:active":{backgroundColor:e.colorBgTextActive}},[`&${t}-mini ${t}-prev, &${t}-mini ${t}-next`]:{minWidth:e.itemSizeSM,height:e.itemSizeSM,margin:0,lineHeight:de(e.itemSizeSM)},[`&${t}-mini:not(${t}-disabled)`]:{[`${t}-prev, ${t}-next`]:{[`&:hover ${t}-item-link`]:{backgroundColor:e.colorBgTextHover},[`&:active ${t}-item-link`]:{backgroundColor:e.colorBgTextActive},[`&${t}-disabled:hover ${t}-item-link`]:{backgroundColor:"transparent"}}},[` - &${t}-mini ${t}-prev ${t}-item-link, - &${t}-mini ${t}-next ${t}-item-link - `]:{backgroundColor:"transparent",borderColor:"transparent","&::after":{height:e.itemSizeSM,lineHeight:de(e.itemSizeSM)}},[`&${t}-mini ${t}-jump-prev, &${t}-mini 
${t}-jump-next`]:{height:e.itemSizeSM,marginInlineEnd:0,lineHeight:de(e.itemSizeSM)},[`&${t}-mini ${t}-options`]:{marginInlineStart:e.paginationMiniOptionsMarginInlineStart,"&-size-changer":{top:e.miniOptionsSizeChangerTop},"&-quick-jumper":{height:e.itemSizeSM,lineHeight:de(e.itemSizeSM),input:Object.assign(Object.assign({},zw(e)),{width:e.paginationMiniQuickJumperInputWidth,height:e.controlHeightSM})}}}},eK=e=>{const{componentCls:t}=e;return{[` - &${t}-simple ${t}-prev, - &${t}-simple ${t}-next - `]:{height:e.itemSizeSM,lineHeight:de(e.itemSizeSM),verticalAlign:"top",[`${t}-item-link`]:{height:e.itemSizeSM,backgroundColor:"transparent",border:0,"&:hover":{backgroundColor:e.colorBgTextHover},"&:active":{backgroundColor:e.colorBgTextActive},"&::after":{height:e.itemSizeSM,lineHeight:de(e.itemSizeSM)}}},[`&${t}-simple ${t}-simple-pager`]:{display:"inline-block",height:e.itemSizeSM,marginInlineEnd:e.marginXS,input:{boxSizing:"border-box",height:"100%",padding:`0 ${de(e.paginationItemPaddingInline)}`,textAlign:"center",backgroundColor:e.itemInputBg,border:`${de(e.lineWidth)} ${e.lineType} ${e.colorBorder}`,borderRadius:e.borderRadius,outline:"none",transition:`border-color ${e.motionDurationMid}`,color:"inherit","&:hover":{borderColor:e.colorPrimary},"&:focus":{borderColor:e.colorPrimaryHover,boxShadow:`${de(e.inputOutlineOffset)} 0 ${de(e.controlOutlineWidth)} ${e.controlOutline}`},"&[disabled]":{color:e.colorTextDisabled,backgroundColor:e.colorBgContainerDisabled,borderColor:e.colorBorder,cursor:"not-allowed"}}}}},tK=e=>{const{componentCls:t}=e;return{[`${t}-jump-prev, ${t}-jump-next`]:{outline:0,[`${t}-item-container`]:{position:"relative",[`${t}-item-link-icon`]:{color:e.colorPrimary,fontSize:e.fontSizeSM,opacity:0,transition:`all 
${e.motionDurationMid}`,"&-svg":{top:0,insetInlineEnd:0,bottom:0,insetInlineStart:0,margin:"auto"}},[`${t}-item-ellipsis`]:{position:"absolute",top:0,insetInlineEnd:0,bottom:0,insetInlineStart:0,display:"block",margin:"auto",color:e.colorTextDisabled,letterSpacing:e.paginationEllipsisLetterSpacing,textAlign:"center",textIndent:e.paginationEllipsisTextIndent,opacity:1,transition:`all ${e.motionDurationMid}`}},"&:hover":{[`${t}-item-link-icon`]:{opacity:1},[`${t}-item-ellipsis`]:{opacity:0}}},[` - ${t}-prev, - ${t}-jump-prev, - ${t}-jump-next - `]:{marginInlineEnd:e.marginXS},[` - ${t}-prev, - ${t}-next, - ${t}-jump-prev, - ${t}-jump-next - `]:{display:"inline-block",minWidth:e.itemSize,height:e.itemSize,color:e.colorText,fontFamily:e.fontFamily,lineHeight:de(e.itemSize),textAlign:"center",verticalAlign:"middle",listStyle:"none",borderRadius:e.borderRadius,cursor:"pointer",transition:`all ${e.motionDurationMid}`},[`${t}-prev, ${t}-next`]:{outline:0,button:{color:e.colorText,cursor:"pointer",userSelect:"none"},[`${t}-item-link`]:{display:"block",width:"100%",height:"100%",padding:0,fontSize:e.fontSizeSM,textAlign:"center",backgroundColor:"transparent",border:`${de(e.lineWidth)} ${e.lineType} transparent`,borderRadius:e.borderRadius,outline:"none",transition:`all ${e.motionDurationMid}`},[`&:hover ${t}-item-link`]:{backgroundColor:e.colorBgTextHover},[`&:active 
${t}-item-link`]:{backgroundColor:e.colorBgTextActive},[`&${t}-disabled:hover`]:{[`${t}-item-link`]:{backgroundColor:"transparent"}}},[`${t}-slash`]:{marginInlineEnd:e.paginationSlashMarginInlineEnd,marginInlineStart:e.paginationSlashMarginInlineStart},[`${t}-options`]:{display:"inline-block",marginInlineStart:e.margin,verticalAlign:"middle","&-size-changer":{display:"inline-block",width:"auto"},"&-quick-jumper":{display:"inline-block",height:e.controlHeight,marginInlineStart:e.marginXS,lineHeight:de(e.controlHeight),verticalAlign:"top",input:Object.assign(Object.assign(Object.assign({},Hw(e)),Aw(e,{borderColor:e.colorBorder,hoverBorderColor:e.colorPrimaryHover,activeBorderColor:e.colorPrimary,activeShadow:e.activeShadow})),{"&[disabled]":Object.assign({},th(e)),width:e.calc(e.controlHeightLG).mul(1.25).equal(),height:e.controlHeight,boxSizing:"border-box",margin:0,marginInlineStart:e.marginXS,marginInlineEnd:e.marginXS})}}}},nK=e=>{const{componentCls:t}=e;return{[`${t}-item`]:{display:"inline-block",minWidth:e.itemSize,height:e.itemSize,marginInlineEnd:e.marginXS,fontFamily:e.fontFamily,lineHeight:de(e.calc(e.itemSize).sub(2).equal()),textAlign:"center",verticalAlign:"middle",listStyle:"none",backgroundColor:e.itemBg,border:`${de(e.lineWidth)} ${e.lineType} transparent`,borderRadius:e.borderRadius,outline:0,cursor:"pointer",userSelect:"none",a:{display:"block",padding:`0 ${de(e.paginationItemPaddingInline)}`,color:e.colorText,"&:hover":{textDecoration:"none"}},[`&:not(${t}-item-active)`]:{"&:hover":{transition:`all ${e.motionDurationMid}`,backgroundColor:e.colorBgTextHover},"&:active":{backgroundColor:e.colorBgTextActive}},"&-active":{fontWeight:e.fontWeightStrong,backgroundColor:e.itemActiveBg,borderColor:e.colorPrimary,a:{color:e.colorPrimary},"&:hover":{borderColor:e.colorPrimaryHover},"&:hover 
a":{color:e.colorPrimaryHover}}}}},rK=e=>{const{componentCls:t}=e;return{[t]:Object.assign(Object.assign(Object.assign(Object.assign(Object.assign(Object.assign(Object.assign(Object.assign({},jn(e)),{display:"flex","&-start":{justifyContent:"start"},"&-center":{justifyContent:"center"},"&-end":{justifyContent:"end"},"ul, ol":{margin:0,padding:0,listStyle:"none"},"&::after":{display:"block",clear:"both",height:0,overflow:"hidden",visibility:"hidden",content:'""'},[`${t}-total-text`]:{display:"inline-block",height:e.itemSize,marginInlineEnd:e.marginXS,lineHeight:de(e.calc(e.itemSize).sub(2).equal()),verticalAlign:"middle"}}),nK(e)),tK(e)),eK(e)),JU(e)),ZU(e)),{[`@media only screen and (max-width: ${e.screenLG}px)`]:{[`${t}-item`]:{"&-after-jump-prev, &-before-jump-next":{display:"none"}}},[`@media only screen and (max-width: ${e.screenSM}px)`]:{[`${t}-options`]:{display:"none"}}}),[`&${e.componentCls}-rtl`]:{direction:"rtl"}}},oK=e=>{const{componentCls:t}=e;return{[`${t}:not(${t}-disabled)`]:{[`${t}-item`]:Object.assign({},Xl(e)),[`${t}-jump-prev, ${t}-jump-next`]:{"&:focus-visible":Object.assign({[`${t}-item-link-icon`]:{opacity:1},[`${t}-item-ellipsis`]:{opacity:0}},qa(e))},[`${t}-prev, ${t}-next`]:{[`&:focus-visible 
${t}-item-link`]:Object.assign({},qa(e))}}}},VM=e=>Object.assign({itemBg:e.colorBgContainer,itemSize:e.controlHeight,itemSizeSM:e.controlHeightSM,itemActiveBg:e.colorBgContainer,itemLinkBg:e.colorBgContainer,itemActiveColorDisabled:e.colorTextDisabled,itemActiveBgDisabled:e.controlItemBgActiveDisabled,itemInputBg:e.colorBgContainer,miniOptionsSizeChangerTop:0},Bw(e)),WM=e=>vn(e,{inputOutlineOffset:0,paginationMiniOptionsMarginInlineStart:e.calc(e.marginXXS).div(2).equal(),paginationMiniQuickJumperInputWidth:e.calc(e.controlHeightLG).mul(1.1).equal(),paginationItemPaddingInline:e.calc(e.marginXXS).mul(1.5).equal(),paginationEllipsisLetterSpacing:e.calc(e.marginXXS).div(2).equal(),paginationSlashMarginInlineStart:e.marginSM,paginationSlashMarginInlineEnd:e.marginSM,paginationEllipsisTextIndent:"0.13em"},Lw(e)),iK=In("Pagination",e=>{const t=WM(e);return[rK(t),oK(t)]},VM),aK=e=>{const{componentCls:t}=e;return{[`${t}${t}-bordered${t}-disabled:not(${t}-mini)`]:{"&, &:hover":{[`${t}-item-link`]:{borderColor:e.colorBorder}},"&:focus-visible":{[`${t}-item-link`]:{borderColor:e.colorBorder}},[`${t}-item, ${t}-item-link`]:{backgroundColor:e.colorBgContainerDisabled,borderColor:e.colorBorder,[`&:hover:not(${t}-item-active)`]:{backgroundColor:e.colorBgContainerDisabled,borderColor:e.colorBorder,a:{color:e.colorTextDisabled}},[`&${t}-item-active`]:{backgroundColor:e.itemActiveBgDisabled}},[`${t}-prev, ${t}-next`]:{"&:hover button":{backgroundColor:e.colorBgContainerDisabled,borderColor:e.colorBorder,color:e.colorTextDisabled},[`${t}-item-link`]:{backgroundColor:e.colorBgContainerDisabled,borderColor:e.colorBorder}}},[`${t}${t}-bordered:not(${t}-mini)`]:{[`${t}-prev, ${t}-next`]:{"&:hover button":{borderColor:e.colorPrimaryHover,backgroundColor:e.itemBg},[`${t}-item-link`]:{backgroundColor:e.itemLinkBg,borderColor:e.colorBorder},[`&:hover 
${t}-item-link`]:{borderColor:e.colorPrimary,backgroundColor:e.itemBg,color:e.colorPrimary},[`&${t}-disabled`]:{[`${t}-item-link`]:{borderColor:e.colorBorder,color:e.colorTextDisabled}}},[`${t}-item`]:{backgroundColor:e.itemBg,border:`${de(e.lineWidth)} ${e.lineType} ${e.colorBorder}`,[`&:hover:not(${t}-item-active)`]:{borderColor:e.colorPrimary,backgroundColor:e.itemBg,a:{color:e.colorPrimary}},"&-active":{borderColor:e.colorPrimary}}}}},sK=ic(["Pagination","bordered"],e=>{const t=WM(e);return[aK(t)]},VM);var lK=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{const{align:t,prefixCls:n,selectPrefixCls:r,className:o,rootClassName:i,style:a,size:s,locale:c,selectComponentClass:u,responsive:p,showSizeChanger:v}=e,h=lK(e,["align","prefixCls","selectPrefixCls","className","rootClassName","style","size","locale","selectComponentClass","responsive","showSizeChanger"]),{xs:m}=yP(p),[,b]=Ir(),{getPrefixCls:y,direction:w,pagination:C={}}=d.useContext(ht),S=y("pagination",n),[E,k,O]=iK(S),$=v??C.showSizeChanger,T=d.useMemo(()=>{const 
_=d.createElement("span",{className:`${S}-item-ellipsis`},"•••"),H=d.createElement("button",{className:`${S}-item-link`,type:"button",tabIndex:-1},w==="rtl"?d.createElement(Yp,null):d.createElement(Db,null)),j=d.createElement("button",{className:`${S}-item-link`,type:"button",tabIndex:-1},w==="rtl"?d.createElement(Db,null):d.createElement(Yp,null)),L=d.createElement("a",{className:`${S}-item-link`},d.createElement("div",{className:`${S}-item-container`},w==="rtl"?d.createElement(Ak,{className:`${S}-item-link-icon`}):d.createElement(Bk,{className:`${S}-item-link-icon`}),_)),F=d.createElement("a",{className:`${S}-item-link`},d.createElement("div",{className:`${S}-item-container`},w==="rtl"?d.createElement(Bk,{className:`${S}-item-link-icon`}):d.createElement(Ak,{className:`${S}-item-link-icon`}),_));return{prevIcon:H,nextIcon:j,jumpPrevIcon:L,jumpNextIcon:F}},[w,S]),[M]=bi("Pagination",hI),P=Object.assign(Object.assign({},M),c),R=Go(s),A=R==="small"||!!(m&&!R&&p),V=y("select",r),z=ie({[`${S}-${t}`]:!!t,[`${S}-mini`]:A,[`${S}-rtl`]:w==="rtl",[`${S}-bordered`]:b.wireframe},C==null?void 0:C.className,o,i,k,O),B=Object.assign(Object.assign({},C==null?void 0:C.style),a);return E(d.createElement(d.Fragment,null,b.wireframe&&d.createElement(sK,{prefixCls:S}),d.createElement(QU,Object.assign({},T,h,{style:B,prefixCls:S,selectPrefixCls:V,className:z,selectComponentClass:u||(A?FM:_M),locale:P,showSizeChanger:$}))))},lv=100,UM=lv/5,KM=lv/2-UM/2,Hm=KM*2*Math.PI,Fk=50,_k=e=>{const{dotClassName:t,style:n,hasCircleCls:r}=e;return d.createElement("circle",{className:ie(`${t}-circle`,{[`${t}-circle-bg`]:r}),r:KM,cx:Fk,cy:Fk,strokeWidth:UM,style:n})},uK=e=>{let{percent:t,prefixCls:n}=e;const r=`${n}-dot`,o=`${r}-holder`,i=`${o}-hidden`,[a,s]=d.useState(!1);sn(()=>{t!==0&&s(!0)},[t!==0]);const c=Math.max(Math.min(t,100),0);if(!a)return null;const u={strokeDashoffset:`${Hm/4}`,strokeDasharray:`${Hm*c/100} ${Hm*(100-c)/100}`};return 
d.createElement("span",{className:ie(o,`${r}-progress`,c<=0&&i)},d.createElement("svg",{viewBox:`0 0 ${lv} ${lv}`,role:"progressbar","aria-valuemin":0,"aria-valuemax":100,"aria-valuenow":c},d.createElement(_k,{dotClassName:r,hasCircleCls:!0}),d.createElement(_k,{dotClassName:r,style:u})))};function dK(e){const{prefixCls:t,percent:n=0}=e,r=`${t}-dot`,o=`${r}-holder`,i=`${o}-hidden`;return d.createElement(d.Fragment,null,d.createElement("span",{className:ie(o,n>0&&i)},d.createElement("span",{className:ie(r,`${t}-dot-spin`)},[1,2,3,4].map(a=>d.createElement("i",{className:`${t}-dot-item`,key:a})))),d.createElement(uK,{prefixCls:t,percent:n}))}function fK(e){const{prefixCls:t,indicator:n,percent:r}=e,o=`${t}-dot`;return n&&d.isValidElement(n)?Dr(n,{className:ie(n.props.className,o),percent:r}):d.createElement(dK,{prefixCls:t,percent:r})}const pK=new fn("antSpinMove",{to:{opacity:1}}),vK=new fn("antRotate",{to:{transform:"rotate(405deg)"}}),hK=e=>{const{componentCls:t,calc:n}=e;return{[t]:Object.assign(Object.assign({},jn(e)),{position:"absolute",display:"none",color:e.colorPrimary,fontSize:0,textAlign:"center",verticalAlign:"middle",opacity:0,transition:`transform ${e.motionDurationSlow} ${e.motionEaseInOutCirc}`,"&-spinning":{position:"relative",display:"inline-block",opacity:1},[`${t}-text`]:{fontSize:e.fontSize,paddingTop:n(n(e.dotSize).sub(e.fontSize)).div(2).add(2).equal()},"&-fullscreen":{position:"fixed",width:"100vw",height:"100vh",backgroundColor:e.colorBgMask,zIndex:e.zIndexPopupBase,inset:0,display:"flex",alignItems:"center",flexDirection:"column",justifyContent:"center",opacity:0,visibility:"hidden",transition:`all ${e.motionDurationMid}`,"&-show":{opacity:1,visibility:"visible"},[t]:{[`${t}-dot-holder`]:{color:e.colorWhite},[`${t}-text`]:{color:e.colorTextLightSolid}}},"&-nested-loading":{position:"relative",[`> div > 
${t}`]:{position:"absolute",top:0,insetInlineStart:0,zIndex:4,display:"block",width:"100%",height:"100%",maxHeight:e.contentHeight,[`${t}-dot`]:{position:"absolute",top:"50%",insetInlineStart:"50%",margin:n(e.dotSize).mul(-1).div(2).equal()},[`${t}-text`]:{position:"absolute",top:"50%",width:"100%",textShadow:`0 1px 2px ${e.colorBgContainer}`},[`&${t}-show-text ${t}-dot`]:{marginTop:n(e.dotSize).div(2).mul(-1).sub(10).equal()},"&-sm":{[`${t}-dot`]:{margin:n(e.dotSizeSM).mul(-1).div(2).equal()},[`${t}-text`]:{paddingTop:n(n(e.dotSizeSM).sub(e.fontSize)).div(2).add(2).equal()},[`&${t}-show-text ${t}-dot`]:{marginTop:n(e.dotSizeSM).div(2).mul(-1).sub(10).equal()}},"&-lg":{[`${t}-dot`]:{margin:n(e.dotSizeLG).mul(-1).div(2).equal()},[`${t}-text`]:{paddingTop:n(n(e.dotSizeLG).sub(e.fontSize)).div(2).add(2).equal()},[`&${t}-show-text ${t}-dot`]:{marginTop:n(e.dotSizeLG).div(2).mul(-1).sub(10).equal()}}},[`${t}-container`]:{position:"relative",transition:`opacity ${e.motionDurationSlow}`,"&::after":{position:"absolute",top:0,insetInlineEnd:0,bottom:0,insetInlineStart:0,zIndex:10,width:"100%",height:"100%",background:e.colorBgContainer,opacity:0,transition:`all ${e.motionDurationSlow}`,content:'""',pointerEvents:"none"}},[`${t}-blur`]:{clear:"both",opacity:.5,userSelect:"none",pointerEvents:"none","&::after":{opacity:.4,pointerEvents:"auto"}}},"&-tip":{color:e.spinDotDefault},[`${t}-dot-holder`]:{width:"1em",height:"1em",fontSize:e.dotSize,display:"inline-block",transition:`transform ${e.motionDurationSlow} ease, opacity ${e.motionDurationSlow} ease`,transformOrigin:"50% 50%",lineHeight:1,color:e.colorPrimary,"&-hidden":{transform:"scale(0.3)",opacity:0}},[`${t}-dot-progress`]:{position:"absolute",top:"50%",transform:"translate(-50%, 
-50%)",insetInlineStart:"50%"},[`${t}-dot`]:{position:"relative",display:"inline-block",fontSize:e.dotSize,width:"1em",height:"1em","&-item":{position:"absolute",display:"block",width:n(e.dotSize).sub(n(e.marginXXS).div(2)).div(2).equal(),height:n(e.dotSize).sub(n(e.marginXXS).div(2)).div(2).equal(),background:"currentColor",borderRadius:"100%",transform:"scale(0.75)",transformOrigin:"50% 50%",opacity:.3,animationName:pK,animationDuration:"1s",animationIterationCount:"infinite",animationTimingFunction:"linear",animationDirection:"alternate","&:nth-child(1)":{top:0,insetInlineStart:0,animationDelay:"0s"},"&:nth-child(2)":{top:0,insetInlineEnd:0,animationDelay:"0.4s"},"&:nth-child(3)":{insetInlineEnd:0,bottom:0,animationDelay:"0.8s"},"&:nth-child(4)":{bottom:0,insetInlineStart:0,animationDelay:"1.2s"}},"&-spin":{transform:"rotate(45deg)",animationName:vK,animationDuration:"1.2s",animationIterationCount:"infinite",animationTimingFunction:"linear"},"&-circle":{strokeLinecap:"round",transition:["stroke-dashoffset","stroke-dasharray","stroke","stroke-width","opacity"].map(r=>`${r} ${e.motionDurationSlow} ease`).join(","),fillOpacity:0,stroke:"currentcolor"},"&-circle-bg":{stroke:e.colorFillSecondary}},[`&-sm ${t}-dot`]:{"&, &-holder":{fontSize:e.dotSizeSM}},[`&-sm ${t}-dot-holder`]:{i:{width:n(n(e.dotSizeSM).sub(n(e.marginXXS).div(2))).div(2).equal(),height:n(n(e.dotSizeSM).sub(n(e.marginXXS).div(2))).div(2).equal()}},[`&-lg ${t}-dot`]:{"&, &-holder":{fontSize:e.dotSizeLG}},[`&-lg ${t}-dot-holder`]:{i:{width:n(n(e.dotSizeLG).sub(e.marginXXS)).div(2).equal(),height:n(n(e.dotSizeLG).sub(e.marginXXS)).div(2).equal()}},[`&${t}-show-text ${t}-text`]:{display:"block"}})}},gK=e=>{const{controlHeightLG:t,controlHeight:n}=e;return{contentHeight:400,dotSize:t/2,dotSizeSM:t*.35,dotSizeLG:n}},mK=In("Spin",e=>{const t=vn(e,{spinDotDefault:e.colorTextDescription});return[hK(t)]},gK),bK=200,Vk=[[30,.05],[70,.03],[96,.01]];function 
yK(e,t){const[n,r]=d.useState(0),o=d.useRef(),i=t==="auto";return d.useEffect(()=>(i&&e&&(r(0),o.current=setInterval(()=>{r(a=>{const s=100-a;for(let c=0;c{clearInterval(o.current)}),[i,e]),i?n:t}var wK=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{var t;const{prefixCls:n,spinning:r=!0,delay:o=0,className:i,rootClassName:a,size:s="default",tip:c,wrapperClassName:u,style:p,children:v,fullscreen:h=!1,indicator:m,percent:b}=e,y=wK(e,["prefixCls","spinning","delay","className","rootClassName","size","tip","wrapperClassName","style","children","fullscreen","indicator","percent"]),{getPrefixCls:w,direction:C,spin:S}=d.useContext(ht),E=w("spin",n),[k,O,$]=mK(E),[T,M]=d.useState(()=>r&&!xK(r,o)),P=yK(T,b);d.useEffect(()=>{if(r){const H=wV(o,()=>{M(!0)});return H(),()=>{var j;(j=H==null?void 0:H.cancel)===null||j===void 0||j.call(H)}}M(!1)},[o,r]);const R=d.useMemo(()=>typeof v<"u"&&!h,[v,h]),A=ie(E,S==null?void 0:S.className,{[`${E}-sm`]:s==="small",[`${E}-lg`]:s==="large",[`${E}-spinning`]:T,[`${E}-show-text`]:!!c,[`${E}-rtl`]:C==="rtl"},i,!h&&a,O,$),V=ie(`${E}-container`,{[`${E}-blur`]:T}),z=(t=m??(S==null?void 0:S.indicator))!==null&&t!==void 0?t:qM,B=Object.assign(Object.assign({},S==null?void 0:S.style),p),_=d.createElement("div",Object.assign({},y,{style:B,className:A,"aria-live":"polite","aria-busy":T}),d.createElement(fK,{prefixCls:E,indicator:z,percent:P}),c&&(R||h)?d.createElement("div",{className:`${E}-text`},c):null);return k(R?d.createElement("div",Object.assign({},y,{className:ie(`${E}-nested-loading`,u,O,$)}),T&&d.createElement("div",{key:"loading"},_),d.createElement("div",{className:V,key:"container"},v)):h?d.createElement("div",{className:ie(`${E}-fullscreen`,{[`${E}-fullscreen-show`]:T},a,O,$)},_):_)};XM.setDefaultIndicator=e=>{qM=e};function 
SK(e){return(arguments.length>1&&arguments[1]!==void 0?arguments[1]:!1)&&e==null?[]:Array.isArray(e)?e:[e]}var CK=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{const{prefixCls:t,className:n,closeIcon:r,closable:o,type:i,title:a,children:s,footer:c}=e,u=CK(e,["prefixCls","className","closeIcon","closable","type","title","children","footer"]),{getPrefixCls:p}=d.useContext(ht),v=p(),h=t||p("modal"),m=br(v),[b,y,w]=DT(h,m),C=`${h}-confirm`;let S={};return i?S={closable:o??!1,title:"",footer:"",children:d.createElement(LT,Object.assign({},e,{prefixCls:h,confirmPrefixCls:C,rootPrefixCls:v,content:s}))}:S={closable:o??!0,title:a,footer:c!==null&&d.createElement(MT,Object.assign({},e)),children:s},b(d.createElement(hT,Object.assign({prefixCls:h,className:ie(y,`${h}-pure-panel`,i&&C,i&&`${C}-${i}`,n,w,m)},u,{closeIcon:PT(h,r),closable:o},S)))},kK=UT(EK);function GM(e){return kd(HT(e))}const wi=jT;wi.useModal=oF;wi.info=function(t){return kd(FT(t))};wi.success=function(t){return kd(_T(t))};wi.error=function(t){return kd(VT(t))};wi.warning=GM;wi.warn=GM;wi.confirm=function(t){return kd(WT(t))};wi.destroyAll=function(){for(;ws.length;){const t=ws.pop();t&&t()}};wi.config=JH;wi._InternalPanelDoNotUseOrYouWillBeFired=kK;const OK=e=>{const{componentCls:t,iconCls:n,antCls:r,zIndexPopup:o,colorText:i,colorWarning:a,marginXXS:s,marginXS:c,fontSize:u,fontWeightStrong:p,colorTextHeading:v}=e;return{[t]:{zIndex:o,[`&${r}-popover`]:{fontSize:u},[`${t}-message`]:{marginBottom:c,display:"flex",flexWrap:"nowrap",alignItems:"start",[`> ${t}-message-icon 
${n}`]:{color:a,fontSize:u,lineHeight:1,marginInlineEnd:c},[`${t}-title`]:{fontWeight:p,color:v,"&:only-child":{fontWeight:"normal"}},[`${t}-description`]:{marginTop:s,color:i}},[`${t}-buttons`]:{textAlign:"end",whiteSpace:"nowrap",button:{marginInlineStart:c}}}}},$K=e=>{const{zIndexPopupBase:t}=e;return{zIndexPopup:t+60}},YM=In("Popconfirm",e=>OK(e),$K,{resetStyle:!1});var IK=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{const{prefixCls:t,okButtonProps:n,cancelButtonProps:r,title:o,description:i,cancelText:a,okText:s,okType:c="primary",icon:u=d.createElement(Nv,null),showCancel:p=!0,close:v,onConfirm:h,onCancel:m,onPopupClick:b}=e,{getPrefixCls:y}=d.useContext(ht),[w]=bi("Popconfirm",hi.Popconfirm),C=Ql(o),S=Ql(i);return d.createElement("div",{className:`${t}-inner-content`,onClick:b},d.createElement("div",{className:`${t}-message`},u&&d.createElement("span",{className:`${t}-message-icon`},u),d.createElement("div",{className:`${t}-message-text`},C&&d.createElement("div",{className:`${t}-title`},C),S&&d.createElement("div",{className:`${t}-description`},S))),d.createElement("div",{className:`${t}-buttons`},p&&d.createElement(jr,Object.assign({onClick:m,size:"small"},r),a||(w==null?void 0:w.cancelText)),d.createElement(fw,{buttonProps:Object.assign(Object.assign({size:"small"},ew(c)),n),actionFn:h,close:v,prefixCls:y("btn"),quitOnNullishReturnValue:!0,emitEvent:!0},s||(w==null?void 0:w.okText))))},TK=e=>{const{prefixCls:t,placement:n,className:r,style:o}=e,i=IK(e,["prefixCls","placement","className","style"]),{getPrefixCls:a}=d.useContext(ht),s=a("popconfirm",t),[c]=YM(s);return c(d.createElement($P,{placement:n,className:ie(s,r),style:o,content:d.createElement(QM,Object.assign({prefixCls:s},i))}))};var PK=function(e,t){var n={};for(var r in 
e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{var n,r;const{prefixCls:o,placement:i="top",trigger:a="click",okType:s="primary",icon:c=d.createElement(Nv,null),children:u,overlayClassName:p,onOpenChange:v,onVisibleChange:h}=e,m=PK(e,["prefixCls","placement","trigger","okType","icon","children","overlayClassName","onOpenChange","onVisibleChange"]),{getPrefixCls:b}=d.useContext(ht),[y,w]=Dn(!1,{value:(n=e.open)!==null&&n!==void 0?n:e.visible,defaultValue:(r=e.defaultOpen)!==null&&r!==void 0?r:e.defaultVisible}),C=(P,R)=>{w(P,!0),h==null||h(P),v==null||v(P,R)},S=P=>{C(!1,P)},E=P=>{var R;return(R=e.onConfirm)===null||R===void 0?void 0:R.call(void 0,P)},k=P=>{var R;C(!1,P),(R=e.onCancel)===null||R===void 0||R.call(void 0,P)},O=(P,R)=>{const{disabled:A=!1}=e;A||C(P,R)},$=b("popconfirm",o),T=ie($,p),[M]=YM($);return M(d.createElement(IP,Object.assign({},Ln(m,["title"]),{trigger:a,placement:i,onOpenChange:O,open:y,ref:t,overlayClassName:T,content:d.createElement(QM,Object.assign({okType:s,icon:c},e,{prefixCls:$,close:S,onConfirm:E,onCancel:k})),"data-popover-inject":!0}),u))}),ZM=MK;ZM._InternalPanelDoNotUseOrYouWillBeFired=TK;var NK={percent:0,prefixCls:"rc-progress",strokeColor:"#2db7f5",strokeLinecap:"round",strokeWidth:1,trailColor:"#D9D9D9",trailWidth:1,gapPosition:"bottom"},RK=function(){var t=d.useRef([]),n=d.useRef(null);return d.useEffect(function(){var r=Date.now(),o=!1;t.current.forEach(function(i){if(i){o=!0;var a=i.style;a.transitionDuration=".3s, .3s, .3s, .06s",n.current&&r-n.current<100&&(a.transitionDuration="0s, 0s")}}),o&&(n.current=Date.now())}),t.current},Wk=0,DK=$r();function jK(){var e;return DK?(e=Wk,Wk+=1):e="TEST_OR_SSR",e}const LK=function(e){var t=d.useState(),n=ve(t,2),r=n[0],o=n[1];return d.useEffect(function(){o("rc_progress_".concat(jK()))},[]),e||r};var Uk=function(t){var n=t.bg,r=t.children;return 
d.createElement("div",{style:{width:"100%",height:"100%",background:n}},r)};function Kk(e,t){return Object.keys(e).map(function(n){var r=parseFloat(n),o="".concat(Math.floor(r*t),"%");return"".concat(e[n]," ").concat(o)})}var BK=d.forwardRef(function(e,t){var n=e.prefixCls,r=e.color,o=e.gradientId,i=e.radius,a=e.style,s=e.ptg,c=e.strokeLinecap,u=e.strokeWidth,p=e.size,v=e.gapDegree,h=r&&st(r)==="object",m=h?"#FFF":void 0,b=p/2,y=d.createElement("circle",{className:"".concat(n,"-circle-path"),r:i,cx:b,cy:b,stroke:m,strokeLinecap:c,strokeWidth:u,opacity:s===0?0:1,style:a,ref:t});if(!h)return y;var w="".concat(o,"-conic"),C=v?"".concat(180+v/2,"deg"):"0deg",S=Kk(r,(360-v)/360),E=Kk(r,1),k="conic-gradient(from ".concat(C,", ").concat(S.join(", "),")"),O="linear-gradient(to ".concat(v?"bottom":"top",", ").concat(E.join(", "),")");return d.createElement(d.Fragment,null,d.createElement("mask",{id:w},y),d.createElement("foreignObject",{x:0,y:0,width:p,height:p,mask:"url(#".concat(w,")")},d.createElement(Uk,{bg:O},d.createElement(Uk,{bg:k}))))}),vu=100,Fm=function(t,n,r,o,i,a,s,c,u,p){var v=arguments.length>10&&arguments[10]!==void 0?arguments[10]:0,h=r/100*360*((360-a)/360),m=a===0?0:{bottom:0,top:180,left:90,right:-90}[s],b=(100-o)/100*n;u==="round"&&o!==100&&(b+=p/2,b>=n&&(b=n-.01));var y=vu/2;return{stroke:typeof c=="string"?c:void 0,strokeDasharray:"".concat(n,"px ").concat(t),strokeDashoffset:b+v,transform:"rotate(".concat(i+h+m,"deg)"),transformOrigin:"".concat(y,"px ").concat(y,"px"),transition:"stroke-dashoffset .3s ease 0s, stroke-dasharray .3s ease 0s, stroke .3s, stroke-width .06s ease .3s, opacity .3s ease 0s",fillOpacity:0}},AK=["id","prefixCls","steps","strokeWidth","trailWidth","gapDegree","gapPosition","trailColor","strokeLinecap","style","className","strokeColor","percent"];function qk(e){var t=e??[];return Array.isArray(t)?t:[t]}var zK=function(t){var n=Z(Z({},NK),t),r=n.id,o=n.prefixCls,i=n.steps,a=n.strokeWidth,s=n.trailWidth,c=n.gapDegree,u=c===void 
0?0:c,p=n.gapPosition,v=n.trailColor,h=n.strokeLinecap,m=n.style,b=n.className,y=n.strokeColor,w=n.percent,C=Mt(n,AK),S=vu/2,E=LK(r),k="".concat(E,"-gradient"),O=S-a/2,$=Math.PI*2*O,T=u>0?90+u/2:-90,M=$*((360-u)/360),P=st(i)==="object"?i:{count:i,gap:2},R=P.count,A=P.gap,V=qk(w),z=qk(y),B=z.find(function(D){return D&&st(D)==="object"}),_=B&&st(B)==="object",H=_?"butt":h,j=Fm($,M,0,100,T,u,p,v,H,a),L=RK(),F=function(){var W=0;return V.map(function(G,q){var J=z[q]||z[z.length-1],Y=Fm($,M,W,G,T,u,p,J,H,a);return W+=G,d.createElement(BK,{key:q,color:J,ptg:G,radius:O,prefixCls:o,gradientId:k,style:Y,strokeLinecap:H,strokeWidth:a,gapDegree:u,ref:function(te){L[q]=te},size:vu})}).reverse()},U=function(){var W=Math.round(R*(V[0]/100)),G=100/R,q=0;return new Array(R).fill(null).map(function(J,Y){var Q=Y<=W-1?z[0]:v,te=Q&&st(Q)==="object"?"url(#".concat(k,")"):void 0,ce=Fm($,M,q,G,T,u,p,Q,"butt",a,A);return q+=(M-ce.strokeDashoffset+A)*100/M,d.createElement("circle",{key:Y,className:"".concat(o,"-circle-path"),r:O,cx:S,cy:S,stroke:te,strokeWidth:a,opacity:1,style:ce,ref:function(ne){L[Y]=ne}})})};return d.createElement("svg",$e({className:ie("".concat(o,"-circle"),b),viewBox:"0 0 ".concat(vu," ").concat(vu),style:m,id:r,role:"presentation"},C),!R&&d.createElement("circle",{className:"".concat(o,"-circle-trail"),r:O,cx:S,cy:S,stroke:v,strokeLinecap:H,strokeWidth:s||a,style:j}),R?U():F())};function Va(e){return!e||e<0?0:e>100?100:e}function cv(e){let{success:t,successPercent:n}=e,r=n;return t&&"progress"in t&&(r=t.progress),t&&"percent"in t&&(r=t.percent),r}const HK=e=>{let{percent:t,success:n,successPercent:r}=e;const o=Va(cv({success:n,successPercent:r}));return[o,Va(Va(t)-o)]},FK=e=>{let{success:t={},strokeColor:n}=e;const{strokeColor:r}=t;return[r||Tl.green,n||null]},oh=(e,t,n)=>{var r,o,i,a;let s=-1,c=-1;if(t==="step"){const u=n.steps,p=n.strokeWidth;typeof e=="string"||typeof e>"u"?(s=e==="small"?2:14,c=p??8):typeof 
e=="number"?[s,c]=[e,e]:[s=14,c=8]=Array.isArray(e)?e:[e.width,e.height],s*=u}else if(t==="line"){const u=n==null?void 0:n.strokeWidth;typeof e=="string"||typeof e>"u"?c=u||(e==="small"?6:8):typeof e=="number"?[s,c]=[e,e]:[s=-1,c=8]=Array.isArray(e)?e:[e.width,e.height]}else(t==="circle"||t==="dashboard")&&(typeof e=="string"||typeof e>"u"?[s,c]=e==="small"?[60,60]:[120,120]:typeof e=="number"?[s,c]=[e,e]:Array.isArray(e)&&(s=(o=(r=e[0])!==null&&r!==void 0?r:e[1])!==null&&o!==void 0?o:120,c=(a=(i=e[0])!==null&&i!==void 0?i:e[1])!==null&&a!==void 0?a:120));return[s,c]},_K=3,VK=e=>_K/e*100,WK=e=>{const{prefixCls:t,trailColor:n=null,strokeLinecap:r="round",gapPosition:o,gapDegree:i,width:a=120,type:s,children:c,success:u,size:p=a,steps:v}=e,[h,m]=oh(p,"circle");let{strokeWidth:b}=e;b===void 0&&(b=Math.max(VK(h),6));const y={width:h,height:m,fontSize:h*.15+6},w=d.useMemo(()=>{if(i||i===0)return i;if(s==="dashboard")return 75},[i,s]),C=HK(e),S=o||s==="dashboard"&&"bottom"||void 0,E=Object.prototype.toString.call(e.strokeColor)==="[object Object]",k=FK({success:u,strokeColor:e.strokeColor}),O=ie(`${t}-inner`,{[`${t}-circle-gradient`]:E}),$=d.createElement(zK,{steps:v,percent:v?C[1]:C,strokeWidth:b,trailWidth:b,strokeColor:v?k[1]:k,strokeLinecap:r,trailColor:n,prefixCls:t,gapDegree:w,gapPosition:S}),T=h<=20,M=d.createElement("div",{className:O,style:y},$,!T&&c);return T?d.createElement(gi,{title:c},M):M},uv="--progress-line-stroke-color",JM="--progress-percent",Xk=e=>{const t=e?"100%":"-100%";return new fn(`antProgress${e?"RTL":"LTR"}Active`,{"0%":{transform:`translateX(${t}) scaleX(0)`,opacity:.1},"20%":{transform:`translateX(${t}) scaleX(0)`,opacity:.5},to:{transform:"translateX(0) 
scaleX(1)",opacity:0}})},UK=e=>{const{componentCls:t,iconCls:n}=e;return{[t]:Object.assign(Object.assign({},jn(e)),{display:"inline-block","&-rtl":{direction:"rtl"},"&-line":{position:"relative",width:"100%",fontSize:e.fontSize},[`${t}-outer`]:{display:"inline-flex",alignItems:"center",width:"100%"},[`${t}-inner`]:{position:"relative",display:"inline-block",width:"100%",flex:1,overflow:"hidden",verticalAlign:"middle",backgroundColor:e.remainingColor,borderRadius:e.lineBorderRadius},[`${t}-inner:not(${t}-circle-gradient)`]:{[`${t}-circle-path`]:{stroke:e.defaultColor}},[`${t}-success-bg, ${t}-bg`]:{position:"relative",background:e.defaultColor,borderRadius:e.lineBorderRadius,transition:`all ${e.motionDurationSlow} ${e.motionEaseInOutCirc}`},[`${t}-layout-bottom`]:{display:"flex",flexDirection:"column",alignItems:"center",justifyContent:"center",[`${t}-text`]:{width:"max-content",marginInlineStart:0,marginTop:e.marginXXS}},[`${t}-bg`]:{overflow:"hidden","&::after":{content:'""',background:{_multi_value_:!0,value:["inherit",`var(${uv})`]},height:"100%",width:`calc(1 / var(${JM}) * 100%)`,display:"block"},[`&${t}-bg-inner`]:{minWidth:"max-content","&::after":{content:"none"},[`${t}-text-inner`]:{color:e.colorWhite,[`&${t}-text-bright`]:{color:"rgba(0, 0, 0, 0.45)"}}}},[`${t}-success-bg`]:{position:"absolute",insetBlockStart:0,insetInlineStart:0,backgroundColor:e.colorSuccess},[`${t}-text`]:{display:"inline-block",marginInlineStart:e.marginXS,color:e.colorText,lineHeight:1,width:"2em",whiteSpace:"nowrap",textAlign:"start",verticalAlign:"middle",wordBreak:"normal",[n]:{fontSize:e.fontSize},[`&${t}-text-outer`]:{width:"max-content"},[`&${t}-text-outer${t}-text-start`]:{width:"max-content",marginInlineStart:0,marginInlineEnd:e.marginXS}},[`${t}-text-inner`]:{display:"flex",justifyContent:"center",alignItems:"center",width:"100%",height:"100%",marginInlineStart:0,padding:`0 
${de(e.paddingXXS)}`,[`&${t}-text-start`]:{justifyContent:"start"},[`&${t}-text-end`]:{justifyContent:"end"}},[`&${t}-status-active`]:{[`${t}-bg::before`]:{position:"absolute",inset:0,backgroundColor:e.colorBgContainer,borderRadius:e.lineBorderRadius,opacity:0,animationName:Xk(),animationDuration:e.progressActiveMotionDuration,animationTimingFunction:e.motionEaseOutQuint,animationIterationCount:"infinite",content:'""'}},[`&${t}-rtl${t}-status-active`]:{[`${t}-bg::before`]:{animationName:Xk(!0)}},[`&${t}-status-exception`]:{[`${t}-bg`]:{backgroundColor:e.colorError},[`${t}-text`]:{color:e.colorError}},[`&${t}-status-exception ${t}-inner:not(${t}-circle-gradient)`]:{[`${t}-circle-path`]:{stroke:e.colorError}},[`&${t}-status-success`]:{[`${t}-bg`]:{backgroundColor:e.colorSuccess},[`${t}-text`]:{color:e.colorSuccess}},[`&${t}-status-success ${t}-inner:not(${t}-circle-gradient)`]:{[`${t}-circle-path`]:{stroke:e.colorSuccess}}})}},KK=e=>{const{componentCls:t,iconCls:n}=e;return{[t]:{[`${t}-circle-trail`]:{stroke:e.remainingColor},[`&${t}-circle ${t}-inner`]:{position:"relative",lineHeight:1,backgroundColor:"transparent"},[`&${t}-circle ${t}-text`]:{position:"absolute",insetBlockStart:"50%",insetInlineStart:0,width:"100%",margin:0,padding:0,color:e.circleTextColor,fontSize:e.circleTextFontSize,lineHeight:1,whiteSpace:"normal",textAlign:"center",transform:"translateY(-50%)",[n]:{fontSize:e.circleIconFontSize}},[`${t}-circle&-status-exception`]:{[`${t}-text`]:{color:e.colorError}},[`${t}-circle&-status-success`]:{[`${t}-text`]:{color:e.colorSuccess}}},[`${t}-inline-circle`]:{lineHeight:1,[`${t}-inner`]:{verticalAlign:"bottom"}}}},qK=e=>{const{componentCls:t}=e;return{[t]:{[`${t}-steps`]:{display:"inline-block","&-outer":{display:"flex",flexDirection:"row",alignItems:"center"},"&-item":{flexShrink:0,minWidth:e.progressStepMinWidth,marginInlineEnd:e.progressStepMarginInlineEnd,backgroundColor:e.remainingColor,transition:`all 
${e.motionDurationSlow}`,"&-active":{backgroundColor:e.defaultColor}}}}}},XK=e=>{const{componentCls:t,iconCls:n}=e;return{[t]:{[`${t}-small&-line, ${t}-small&-line ${t}-text ${n}`]:{fontSize:e.fontSizeSM}}}},GK=e=>({circleTextColor:e.colorText,defaultColor:e.colorInfo,remainingColor:e.colorFillSecondary,lineBorderRadius:100,circleTextFontSize:"1em",circleIconFontSize:`${e.fontSize/e.fontSizeSM}em`}),YK=In("Progress",e=>{const t=e.calc(e.marginXXS).div(2).equal(),n=vn(e,{progressStepMarginInlineEnd:t,progressStepMinWidth:t,progressActiveMotionDuration:"2.4s"});return[UK(n),KK(n),qK(n),XK(n)]},GK);var QK=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{let t=[];return Object.keys(e).forEach(n=>{const r=parseFloat(n.replace(/%/g,""));isNaN(r)||t.push({key:r,value:e[n]})}),t=t.sort((n,r)=>n.key-r.key),t.map(n=>{let{key:r,value:o}=n;return`${o} ${r}%`}).join(", ")},JK=(e,t)=>{const{from:n=Tl.blue,to:r=Tl.blue,direction:o=t==="rtl"?"to left":"to right"}=e,i=QK(e,["from","to","direction"]);if(Object.keys(i).length!==0){const s=ZK(i),c=`linear-gradient(${o}, ${s})`;return{background:c,[uv]:c}}const a=`linear-gradient(${o}, ${n}, ${r})`;return{background:a,[uv]:a}},eq=e=>{const{prefixCls:t,direction:n,percent:r,size:o,strokeWidth:i,strokeColor:a,strokeLinecap:s="round",children:c,trailColor:u=null,percentPosition:p,success:v}=e,{align:h,type:m}=p,b=a&&typeof a!="string"?JK(a,n):{[uv]:a,background:a},y=s==="square"||s==="butt"?0:void 0,w=o??[-1,i||(o==="small"?6:8)],[C,S]=oh(w,"line",{strokeWidth:i}),E={backgroundColor:u||void 0,borderRadius:y},k=Object.assign(Object.assign({width:`${Va(r)}%`,height:S,borderRadius:y},b),{[JM]:Va(r)/100}),O=cv(e),$={width:`${Va(O)}%`,height:S,borderRadius:y,backgroundColor:v==null?void 
0:v.strokeColor},T={width:C<0?"100%":C},M=d.createElement("div",{className:`${t}-inner`,style:E},d.createElement("div",{className:ie(`${t}-bg`,`${t}-bg-${m}`),style:k},m==="inner"&&c),O!==void 0&&d.createElement("div",{className:`${t}-success-bg`,style:$})),P=m==="outer"&&h==="start",R=m==="outer"&&h==="end";return m==="outer"&&h==="center"?d.createElement("div",{className:`${t}-layout-bottom`},M,c):d.createElement("div",{className:`${t}-outer`,style:T},P&&c,M,R&&c)},tq=e=>{const{size:t,steps:n,percent:r=0,strokeWidth:o=8,strokeColor:i,trailColor:a=null,prefixCls:s,children:c}=e,u=Math.round(n*(r/100)),v=t??[t==="small"?2:14,o],[h,m]=oh(v,"step",{steps:n,strokeWidth:o}),b=h/n,y=new Array(n);for(let w=0;w{const{prefixCls:n,className:r,rootClassName:o,steps:i,strokeColor:a,percent:s=0,size:c="default",showInfo:u=!0,type:p="line",status:v,format:h,style:m,percentPosition:b={}}=e,y=nq(e,["prefixCls","className","rootClassName","steps","strokeColor","percent","size","showInfo","type","status","format","style","percentPosition"]),{align:w="end",type:C="outer"}=b,S=Array.isArray(a)?a[0]:a,E=typeof a=="string"||Array.isArray(a)?a:void 0,k=d.useMemo(()=>{if(S){const F=typeof S=="string"?S:Object.values(S)[0];return new xn(F).isLight()}return!1},[a]),O=d.useMemo(()=>{var F,U;const D=cv(e);return parseInt(D!==void 0?(F=D??0)===null||F===void 0?void 0:F.toString():(U=s??0)===null||U===void 0?void 0:U.toString(),10)},[s,e.success,e.successPercent]),$=d.useMemo(()=>!rq.includes(v)&&O>=100?"success":v||"normal",[v,O]),{getPrefixCls:T,direction:M,progress:P}=d.useContext(ht),R=T("progress",n),[A,V,z]=YK(R),B=p==="line",_=B&&!i,H=d.useMemo(()=>{if(!u)return null;const F=cv(e);let U;const D=h||(G=>`${G}%`),W=B&&k&&C==="inner";return 
C==="inner"||h||$!=="exception"&&$!=="success"?U=D(Va(s),Va(F)):$==="exception"?U=B?d.createElement(bd,null):d.createElement(yd,null):$==="success"&&(U=B?d.createElement(Jy,null):d.createElement(Cw,null)),d.createElement("span",{className:ie(`${R}-text`,{[`${R}-text-bright`]:W,[`${R}-text-${w}`]:_,[`${R}-text-${C}`]:_}),title:typeof U=="string"?U:void 0},U)},[u,s,O,$,p,R,h]);let j;p==="line"?j=i?d.createElement(tq,Object.assign({},e,{strokeColor:E,prefixCls:R,steps:typeof i=="object"?i.count:i}),H):d.createElement(eq,Object.assign({},e,{strokeColor:S,prefixCls:R,direction:M,percentPosition:{align:w,type:C}}),H):(p==="circle"||p==="dashboard")&&(j=d.createElement(WK,Object.assign({},e,{strokeColor:S,prefixCls:R,progressStatus:$}),H));const L=ie(R,`${R}-status-${$}`,{[`${R}-${p==="dashboard"&&"circle"||p}`]:p!=="line",[`${R}-inline-circle`]:p==="circle"&&oh(c,"circle")[0]<=20,[`${R}-line`]:_,[`${R}-line-align-${w}`]:_,[`${R}-line-position-${C}`]:_,[`${R}-steps`]:i,[`${R}-show-info`]:u,[`${R}-${c}`]:typeof c=="string",[`${R}-rtl`]:M==="rtl"},P==null?void 0:P.className,r,o,V,z);return A(d.createElement("div",Object.assign({ref:t,style:Object.assign(Object.assign({},P==null?void 0:P.style),m),className:L,role:"progressbar","aria-valuenow":O,"aria-valuemin":0,"aria-valuemax":100},Ln(y,["trailColor","strokeWidth","width","gapDegree","gapPosition","strokeLinecap","success","successPercent"])),j))});var iq={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M869 487.8L491.2 159.9c-2.9-2.5-6.6-3.9-10.5-3.9h-88.5c-7.4 0-10.8 9.2-5.2 14l350.2 304H152c-4.4 0-8 3.6-8 8v60c0 4.4 3.6 8 8 8h585.1L386.9 854c-5.6 4.9-2.2 14 5.2 14h91.5c1.9 0 3.8-.7 5.2-2L869 536.2a32.07 32.07 0 000-48.4z"}}]},name:"arrow-right",theme:"outlined"},aq=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:iq}))},sq=d.forwardRef(aq),lq={icon:{tag:"svg",attrs:{viewBox:"0 0 1024 1024",focusable:"false"},children:[{tag:"path",attrs:{d:"M840.4 
300H183.6c-19.7 0-30.7 20.8-18.5 35l328.4 380.8c9.4 10.9 27.5 10.9 37 0L858.9 335c12.2-14.2 1.2-35-18.5-35z"}}]},name:"caret-down",theme:"filled"},cq=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:lq}))},uq=d.forwardRef(cq),dq={icon:{tag:"svg",attrs:{viewBox:"0 0 1024 1024",focusable:"false"},children:[{tag:"path",attrs:{d:"M840.4 300H183.6c-19.7 0-30.7 20.8-18.5 35l328.4 380.8c9.4 10.9 27.5 10.9 37 0L858.9 335c12.2-14.2 1.2-35-18.5-35z"}}]},name:"caret-down",theme:"outlined"},fq=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:dq}))},pq=d.forwardRef(fq),vq={icon:{tag:"svg",attrs:{viewBox:"0 0 1024 1024",focusable:"false"},children:[{tag:"path",attrs:{d:"M858.9 689L530.5 308.2c-9.4-10.9-27.5-10.9-37 0L165.1 689c-12.2 14.2-1.2 35 18.5 35h656.8c19.7 0 30.7-20.8 18.5-35z"}}]},name:"caret-up",theme:"outlined"},hq=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:vq}))},gq=d.forwardRef(hq),mq={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M832 64H296c-4.4 0-8 3.6-8 8v56c0 4.4 3.6 8 8 8h496v688c0 4.4 3.6 8 8 8h56c4.4 0 8-3.6 8-8V96c0-17.7-14.3-32-32-32zM704 192H192c-17.7 0-32 14.3-32 32v530.7c0 8.5 3.4 16.6 9.4 22.6l173.3 173.3c2.2 2.2 4.7 4 7.4 5.5v1.9h4.2c3.5 1.3 7.2 2 11 2H704c17.7 0 32-14.3 32-32V224c0-17.7-14.3-32-32-32zM350 856.2L263.9 770H350v86.2zM664 888H414V746c0-22.1-17.9-40-40-40H232V264h432v624z"}}]},name:"copy",theme:"outlined"},bq=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:mq}))},yq=d.forwardRef(bq),wq={icon:function(t,n){return{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M911.9 283.9v.5L835.5 865c-1 8-7.9 14-15.9 14H204.5c-8.1 0-14.9-6.1-16-14l-76.4-580.6v-.6 1.6L188.5 866c1.1 7.9 7.9 14 16 14h615.1c8 0 14.9-6 15.9-14l76.4-580.6c.1-.5.1-1 0-1.5z",fill:n}},{tag:"path",attrs:{d:"M773.6 810.6l53.9-409.4-139.8 86.1L512 252.9 336.3 487.3l-139.8-86.1 53.8 409.4h523.3zm-374.2-189c0-62.1 50.5-112.6 
112.6-112.6s112.6 50.5 112.6 112.6v1c0 62.1-50.5 112.6-112.6 112.6s-112.6-50.5-112.6-112.6v-1z",fill:n}},{tag:"path",attrs:{d:"M512 734.2c61.9 0 112.3-50.2 112.6-112.1v-.5c0-62.1-50.5-112.6-112.6-112.6s-112.6 50.5-112.6 112.6v.5c.3 61.9 50.7 112.1 112.6 112.1zm0-160.9c26.6 0 48.2 21.6 48.2 48.3 0 26.6-21.6 48.3-48.2 48.3s-48.2-21.6-48.2-48.3c0-26.6 21.6-48.3 48.2-48.3z",fill:t}},{tag:"path",attrs:{d:"M188.5 865c1.1 7.9 7.9 14 16 14h615.1c8 0 14.9-6 15.9-14l76.4-580.6v-.5c.3-6.4-6.7-10.8-12.3-7.4L705 396.4 518.4 147.5a8.06 8.06 0 00-12.9 0L319 396.4 124.3 276.5c-5.5-3.4-12.6.9-12.2 7.3v.6L188.5 865zm147.8-377.7L512 252.9l175.7 234.4 139.8-86.1-53.9 409.4H250.3l-53.8-409.4 139.8 86.1z",fill:t}}]}},name:"crown",theme:"twotone"},xq=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:wq}))},eN=d.forwardRef(xq),Sq={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M360 184h-8c4.4 0 8-3.6 8-8v8h304v-8c0 4.4 3.6 8 8 8h-8v72h72v-80c0-35.3-28.7-64-64-64H352c-35.3 0-64 28.7-64 64v80h72v-72zm504 72H160c-17.7 0-32 14.3-32 32v32c0 4.4 3.6 8 8 8h60.4l24.7 523c1.6 34.1 29.8 61 63.9 61h454c34.2 0 62.3-26.8 63.9-61l24.7-523H888c4.4 0 8-3.6 8-8v-32c0-17.7-14.3-32-32-32zM731.3 840H292.7l-24.2-512h487l-24.2 512z"}}]},name:"delete",theme:"outlined"},Cq=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:Sq}))},Eq=d.forwardRef(Cq),kq={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M505.7 661a8 8 0 0012.6 0l112-141.7c4.1-5.2.4-12.9-6.3-12.9h-74.1V168c0-4.4-3.6-8-8-8h-60c-4.4 0-8 3.6-8 8v338.3H400c-6.7 0-10.4 7.7-6.3 12.9l112 141.8zM878 626h-60c-4.4 0-8 3.6-8 8v154H214V634c0-4.4-3.6-8-8-8h-60c-4.4 0-8 3.6-8 8v198c0 17.7 14.3 32 32 32h684c17.7 0 32-14.3 32-32V634c0-4.4-3.6-8-8-8z"}}]},name:"download",theme:"outlined"},Oq=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:kq}))},$q=d.forwardRef(Oq),Iq={icon:{tag:"svg",attrs:{viewBox:"64 64 896 
896",focusable:"false"},children:[{tag:"path",attrs:{d:"M257.7 752c2 0 4-.2 6-.5L431.9 722c2-.4 3.9-1.3 5.3-2.8l423.9-423.9a9.96 9.96 0 000-14.1L694.9 114.9c-1.9-1.9-4.4-2.9-7.1-2.9s-5.2 1-7.1 2.9L256.8 538.8c-1.5 1.5-2.4 3.3-2.8 5.3l-29.5 168.2a33.5 33.5 0 009.4 29.8c6.6 6.4 14.9 9.9 23.8 9.9zm67.4-174.4L687.8 215l73.3 73.3-362.7 362.6-88.9 15.7 15.6-89zM880 836H144c-17.7 0-32 14.3-32 32v36c0 4.4 3.6 8 8 8h784c4.4 0 8-3.6 8-8v-36c0-17.7-14.3-32-32-32z"}}]},name:"edit",theme:"outlined"},Tq=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:Iq}))},Pq=d.forwardRef(Tq),Mq={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M864 170h-60c-4.4 0-8 3.6-8 8v518H310v-73c0-6.7-7.8-10.5-13-6.3l-141.9 112a8 8 0 000 12.6l141.9 112c5.3 4.2 13 .4 13-6.3v-75h498c35.3 0 64-28.7 64-64V178c0-4.4-3.6-8-8-8z"}}]},name:"enter",theme:"outlined"},Nq=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:Mq}))},Rq=d.forwardRef(Nq),Dq={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M553.1 509.1l-77.8 99.2-41.1-52.4a8 8 0 00-12.6 0l-99.8 127.2a7.98 7.98 0 006.3 12.9H696c6.7 0 10.4-7.7 6.3-12.9l-136.5-174a8.1 8.1 0 00-12.7 0zM360 442a40 40 0 1080 0 40 40 0 10-80 0zm494.6-153.4L639.4 73.4c-6-6-14.1-9.4-22.6-9.4H192c-17.7 0-32 14.3-32 32v832c0 17.7 14.3 32 32 32h640c17.7 0 32-14.3 32-32V311.3c0-8.5-3.4-16.7-9.4-22.7zM790.2 326H602V137.8L790.2 326zm1.8 562H232V136h302v216a42 42 0 0042 42h216v494z"}}]},name:"file-image",theme:"outlined"},jq=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:Dq}))},Lq=d.forwardRef(jq),Bq={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M854.6 288.6L639.4 73.4c-6-6-14.1-9.4-22.6-9.4H192c-17.7 0-32 14.3-32 32v832c0 17.7 14.3 32 32 32h640c17.7 0 32-14.3 32-32V311.3c0-8.5-3.4-16.7-9.4-22.7zM790.2 326H602V137.8L790.2 326zm1.8 562H232V136h302v216a42 42 0 0042 
42h216v494z"}}]},name:"file",theme:"outlined"},Aq=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:Bq}))},tN=d.forwardRef(Aq),zq={icon:function(t,n){return{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M534 352V136H232v752h560V394H576a42 42 0 01-42-42z",fill:n}},{tag:"path",attrs:{d:"M854.6 288.6L639.4 73.4c-6-6-14.1-9.4-22.6-9.4H192c-17.7 0-32 14.3-32 32v832c0 17.7 14.3 32 32 32h640c17.7 0 32-14.3 32-32V311.3c0-8.5-3.4-16.7-9.4-22.7zM602 137.8L790.2 326H602V137.8zM792 888H232V136h302v216a42 42 0 0042 42h216v494z",fill:t}}]}},name:"file",theme:"twotone"},Hq=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:zq}))},Fq=d.forwardRef(Hq),_q={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M349 838c0 17.7 14.2 32 31.8 32h262.4c17.6 0 31.8-14.3 31.8-32V642H349v196zm531.1-684H143.9c-24.5 0-39.8 26.7-27.5 48l221.3 376h348.8l221.3-376c12.1-21.3-3.2-48-27.7-48z"}}]},name:"filter",theme:"filled"},Vq=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:_q}))},Wq=d.forwardRef(Vq),Uq={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M484 443.1V528h-84.5c-4.1 0-7.5 3.1-7.5 7v42c0 3.8 3.4 7 7.5 7H484v84.9c0 3.9 3.2 7.1 7 7.1h42c3.9 0 7-3.2 7-7.1V584h84.5c4.1 0 7.5-3.2 7.5-7v-42c0-3.9-3.4-7-7.5-7H540v-84.9c0-3.9-3.1-7.1-7-7.1h-42c-3.8 0-7 3.2-7 7.1zm396-144.7H521L403.7 186.2a8.15 8.15 0 00-5.5-2.2H144c-17.7 0-32 14.3-32 32v592c0 17.7 14.3 32 32 32h736c17.7 0 32-14.3 32-32V330.4c0-17.7-14.3-32-32-32zM840 768H184V256h188.5l119.6 114.4H840V768z"}}]},name:"folder-add",theme:"outlined"},Kq=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:Uq}))},qq=d.forwardRef(Kq),Xq={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M928 444H820V330.4c0-17.7-14.3-32-32-32H473L355.7 186.2a8.15 8.15 0 00-5.5-2.2H96c-17.7 0-32 14.3-32 32v592c0 17.7 14.3 32 32 32h698c13 0 
24.8-7.9 29.7-20l134-332c1.5-3.8 2.3-7.9 2.3-12 0-17.7-14.3-32-32-32zM136 256h188.5l119.6 114.4H748V444H238c-13 0-24.8 7.9-29.7 20L136 643.2V256zm635.3 512H159l103.3-256h612.4L771.3 768z"}}]},name:"folder-open",theme:"outlined"},Gq=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:Xq}))},Yq=d.forwardRef(Gq),Qq={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M880 298.4H521L403.7 186.2a8.15 8.15 0 00-5.5-2.2H144c-17.7 0-32 14.3-32 32v592c0 17.7 14.3 32 32 32h736c17.7 0 32-14.3 32-32V330.4c0-17.7-14.3-32-32-32zM840 768H184V256h188.5l119.6 114.4H840V768z"}}]},name:"folder",theme:"outlined"},Zq=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:Qq}))},Jq=d.forwardRef(Zq),eX={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M300 276.5a56 56 0 1056-97 56 56 0 00-56 97zm0 284a56 56 0 1056-97 56 56 0 00-56 97zM640 228a56 56 0 10112 0 56 56 0 00-112 0zm0 284a56 56 0 10112 0 56 56 0 00-112 0zM300 844.5a56 56 0 1056-97 56 56 0 00-56 97zM640 796a56 56 0 10112 0 56 56 0 00-112 0z"}}]},name:"holder",theme:"outlined"},tX=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:eX}))},nX=d.forwardRef(tX),rX={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M328 544h368c4.4 0 8-3.6 8-8v-48c0-4.4-3.6-8-8-8H328c-4.4 0-8 3.6-8 8v48c0 4.4 3.6 8 8 8z"}},{tag:"path",attrs:{d:"M880 112H144c-17.7 0-32 14.3-32 32v736c0 17.7 14.3 32 32 32h736c17.7 0 32-14.3 32-32V144c0-17.7-14.3-32-32-32zm-40 728H184V184h656v656z"}}]},name:"minus-square",theme:"outlined"},oX=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:rX}))},iX=d.forwardRef(oX),aX={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M779.3 196.6c-94.2-94.2-247.6-94.2-341.7 0l-261 260.8c-1.7 1.7-2.6 4-2.6 6.4s.9 4.7 2.6 6.4l36.9 36.9a9 9 0 0012.7 0l261-260.8c32.4-32.4 75.5-50.2 121.3-50.2s88.9 17.8 121.2 50.2c32.4 
32.4 50.2 75.5 50.2 121.2 0 45.8-17.8 88.8-50.2 121.2l-266 265.9-43.1 43.1c-40.3 40.3-105.8 40.3-146.1 0-19.5-19.5-30.2-45.4-30.2-73s10.7-53.5 30.2-73l263.9-263.8c6.7-6.6 15.5-10.3 24.9-10.3h.1c9.4 0 18.1 3.7 24.7 10.3 6.7 6.7 10.3 15.5 10.3 24.9 0 9.3-3.7 18.1-10.3 24.7L372.4 653c-1.7 1.7-2.6 4-2.6 6.4s.9 4.7 2.6 6.4l36.9 36.9a9 9 0 0012.7 0l215.6-215.6c19.9-19.9 30.8-46.3 30.8-74.4s-11-54.6-30.8-74.4c-41.1-41.1-107.9-41-149 0L463 364 224.8 602.1A172.22 172.22 0 00174 724.8c0 46.3 18.1 89.8 50.8 122.5 33.9 33.8 78.3 50.7 122.7 50.7 44.4 0 88.8-16.9 122.6-50.7l309.2-309C824.8 492.7 850 432 850 367.5c.1-64.6-25.1-125.3-70.7-170.9z"}}]},name:"paper-clip",theme:"outlined"},sX=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:aX}))},lX=d.forwardRef(sX),cX={icon:function(t,n){return{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M928 160H96c-17.7 0-32 14.3-32 32v640c0 17.7 14.3 32 32 32h832c17.7 0 32-14.3 32-32V192c0-17.7-14.3-32-32-32zm-40 632H136v-39.9l138.5-164.3 150.1 178L658.1 489 888 761.6V792zm0-129.8L664.2 396.8c-3.2-3.8-9-3.8-12.2 0L424.6 666.4l-144-170.7c-3.2-3.8-9-3.8-12.2 0L136 652.7V232h752v430.2z",fill:t}},{tag:"path",attrs:{d:"M424.6 765.8l-150.1-178L136 752.1V792h752v-30.4L658.1 489z",fill:n}},{tag:"path",attrs:{d:"M136 652.7l132.4-157c3.2-3.8 9-3.8 12.2 0l144 170.7L652 396.8c3.2-3.8 9-3.8 12.2 0L888 662.2V232H136v420.7zM304 280a88 88 0 110 176 88 88 0 010-176z",fill:n}},{tag:"path",attrs:{d:"M276 368a28 28 0 1056 0 28 28 0 10-56 0z",fill:n}},{tag:"path",attrs:{d:"M304 456a88 88 0 100-176 88 88 0 000 176zm0-116c15.5 0 28 12.5 28 28s-12.5 28-28 28-28-12.5-28-28 12.5-28 28-28z",fill:t}}]}},name:"picture",theme:"twotone"},uX=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:cX}))},dX=d.forwardRef(uX),fX={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M328 544h152v152c0 4.4 3.6 8 8 8h48c4.4 0 8-3.6 8-8V544h152c4.4 0 8-3.6 
8-8v-48c0-4.4-3.6-8-8-8H544V328c0-4.4-3.6-8-8-8h-48c-4.4 0-8 3.6-8 8v152H328c-4.4 0-8 3.6-8 8v48c0 4.4 3.6 8 8 8z"}},{tag:"path",attrs:{d:"M880 112H144c-17.7 0-32 14.3-32 32v736c0 17.7 14.3 32 32 32h736c17.7 0 32-14.3 32-32V144c0-17.7-14.3-32-32-32zm-40 728H184V184h656v656z"}}]},name:"plus-square",theme:"outlined"},pX=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:fX}))},vX=d.forwardRef(pX),hX={icon:{tag:"svg",attrs:{viewBox:"64 64 896 896",focusable:"false"},children:[{tag:"path",attrs:{d:"M928 160H96c-17.7 0-32 14.3-32 32v640c0 17.7 14.3 32 32 32h832c17.7 0 32-14.3 32-32V192c0-17.7-14.3-32-32-32zm-40 208H676V232h212v136zm0 224H676V432h212v160zM412 432h200v160H412V432zm200-64H412V232h200v136zm-476 64h212v160H136V432zm0-200h212v136H136V232zm0 424h212v136H136V656zm276 0h200v136H412V656zm476 136H676V656h212v136z"}}]},name:"table",theme:"outlined"},gX=function(t,n){return d.createElement(en,$e({},t,{ref:n,icon:hX}))},mX=d.forwardRef(gX),ja={},Nd="rc-table-internal-hook";function qw(e){var t=d.createContext(void 0),n=function(o){var i=o.value,a=o.children,s=d.useRef(i);s.current=i;var c=d.useState(function(){return{getValue:function(){return s.current},listeners:new Set}}),u=ve(c,1),p=u[0];return sn(function(){pi.unstable_batchedUpdates(function(){p.listeners.forEach(function(v){v(i)})})},[i]),d.createElement(t.Provider,{value:p},a)};return{Context:t,Provider:n,defaultValue:e}}function Lr(e,t){var n=gn(typeof t=="function"?t:function(v){if(t===void 0)return v;if(!Array.isArray(t))return v[t];var h={};return t.forEach(function(m){h[m]=v[m]}),h}),r=d.useContext(e==null?void 0:e.Context),o=r||{},i=o.listeners,a=o.getValue,s=d.useRef();s.current=n(r?a():e==null?void 0:e.defaultValue);var c=d.useState({}),u=ve(c,2),p=u[1];return sn(function(){if(!r)return;function v(h){var m=n(h);zi(s.current,m,!0)||p({})}return i.add(v),function(){i.delete(v)}},[r]),s.current}function bX(){var e=d.createContext(null);function t(){return d.useContext(e)}function n(o,i){var 
a=vi(o),s=function(u,p){var v=a?{ref:p}:{},h=d.useRef(0),m=d.useRef(u),b=t();return b!==null?d.createElement(o,$e({},u,v)):((!i||i(m.current,u))&&(h.current+=1),m.current=u,d.createElement(e.Provider,{value:h.current},d.createElement(o,$e({},u,v))))};return a?d.forwardRef(s):s}function r(o,i){var a=vi(o),s=function(u,p){var v=a?{ref:p}:{};return t(),d.createElement(o,$e({},u,v))};return a?d.memo(d.forwardRef(s),i):d.memo(s,i)}return{makeImmutable:n,responseImmutable:r,useImmutableMark:t}}var Xw=bX(),nN=Xw.makeImmutable,gc=Xw.responseImmutable,yX=Xw.useImmutableMark,Yr=qw(),rN=d.createContext({renderWithProps:!1}),wX="RC_TABLE_KEY";function xX(e){return e==null?[]:Array.isArray(e)?e:[e]}function ih(e){var t=[],n={};return e.forEach(function(r){for(var o=r||{},i=o.key,a=o.dataIndex,s=i||xX(a).join("-")||wX;n[s];)s="".concat(s,"_next");n[s]=!0,t.push(s)}),t}function Lb(e){return e!=null}function SX(e){return typeof e=="number"&&!Number.isNaN(e)}function CX(e){return e&&st(e)==="object"&&!Array.isArray(e)&&!d.isValidElement(e)}function EX(e,t,n,r,o,i){var a=d.useContext(rN),s=yX(),c=Ls(function(){if(Lb(r))return[r];var u=t==null||t===""?[]:Array.isArray(t)?t:[t],p=bo(e,u),v=p,h=void 0;if(o){var m=o(p,e,n);CX(m)?(v=m.children,h=m.props,a.renderWithProps=!0):v=m}return[v,h]},[s,e,r,t,o,n],function(u,p){if(i){var v=ve(u,2),h=v[1],m=ve(p,2),b=m[1];return i(b,h)}return a.renderWithProps?!0:!zi(u,p,!0)});return c}function kX(e,t,n,r){var o=e+t-1;return e<=r&&o>=n}function OX(e,t){return Lr(Yr,function(n){var r=kX(e,t||1,n.hoverStartRow,n.hoverEndRow);return[r,n.onHover]})}var $X=function(t){var n=t.ellipsis,r=t.rowType,o=t.children,i,a=n===!0?{showTitle:!0}:n;return a&&(a.showTitle||r==="header")&&(typeof o=="string"||typeof o=="number"?i=o.toString():d.isValidElement(o)&&typeof o.props.children=="string"&&(i=o.props.children)),i};function IX(e){var 
t,n,r,o,i,a,s,c,u=e.component,p=e.children,v=e.ellipsis,h=e.scope,m=e.prefixCls,b=e.className,y=e.align,w=e.record,C=e.render,S=e.dataIndex,E=e.renderIndex,k=e.shouldCellUpdate,O=e.index,$=e.rowType,T=e.colSpan,M=e.rowSpan,P=e.fixLeft,R=e.fixRight,A=e.firstFixLeft,V=e.lastFixLeft,z=e.firstFixRight,B=e.lastFixRight,_=e.appendNode,H=e.additionalProps,j=H===void 0?{}:H,L=e.isSticky,F="".concat(m,"-cell"),U=Lr(Yr,["supportSticky","allColumnsFixedLeft","rowHoverable"]),D=U.supportSticky,W=U.allColumnsFixedLeft,G=U.rowHoverable,q=EX(w,S,E,p,C,k),J=ve(q,2),Y=J[0],Q=J[1],te={},ce=typeof P=="number"&&D,se=typeof R=="number"&&D;ce&&(te.position="sticky",te.left=P),se&&(te.position="sticky",te.right=R);var ne=(t=(n=(r=Q==null?void 0:Q.colSpan)!==null&&r!==void 0?r:j.colSpan)!==null&&n!==void 0?n:T)!==null&&t!==void 0?t:1,ae=(o=(i=(a=Q==null?void 0:Q.rowSpan)!==null&&a!==void 0?a:j.rowSpan)!==null&&i!==void 0?i:M)!==null&&o!==void 0?o:1,ee=OX(O,ae),re=ve(ee,2),le=re[0],pe=re[1],Oe=gn(function(Ie){var Le;w&&pe(O,O+ae-1),j==null||(Le=j.onMouseEnter)===null||Le===void 0||Le.call(j,Ie)}),ge=gn(function(Ie){var Le;w&&pe(-1,-1),j==null||(Le=j.onMouseLeave)===null||Le===void 0||Le.call(j,Ie)});if(ne===0||ae===0)return null;var Re=(s=j.title)!==null&&s!==void 0?s:$X({rowType:$,ellipsis:v,children:Y}),ye=ie(F,b,(c={},K(K(K(K(K(K(K(K(K(K(c,"".concat(F,"-fix-left"),ce&&D),"".concat(F,"-fix-left-first"),A&&D),"".concat(F,"-fix-left-last"),V&&D),"".concat(F,"-fix-left-all"),V&&W&&D),"".concat(F,"-fix-right"),se&&D),"".concat(F,"-fix-right-first"),z&&D),"".concat(F,"-fix-right-last"),B&&D),"".concat(F,"-ellipsis"),v),"".concat(F,"-with-append"),_),"".concat(F,"-fix-sticky"),(ce||se)&&L&&D),K(c,"".concat(F,"-row-hover"),!Q&&le)),j.className,Q==null?void 0:Q.className),Te={};y&&(Te.textAlign=y);var Ae=Z(Z(Z(Z({},te),j.style),Te),Q==null?void 0:Q.style),me=Y;return 
st(me)==="object"&&!Array.isArray(me)&&!d.isValidElement(me)&&(me=null),v&&(V||z)&&(me=d.createElement("span",{className:"".concat(F,"-content")},me)),d.createElement(u,$e({},Q,j,{className:ye,style:Ae,title:Re,scope:h,onMouseEnter:G?Oe:void 0,onMouseLeave:G?ge:void 0,colSpan:ne!==1?ne:null,rowSpan:ae!==1?ae:null}),_,me)}const mc=d.memo(IX);function Gw(e,t,n,r,o){var i=n[e]||{},a=n[t]||{},s,c;i.fixed==="left"?s=r.left[o==="rtl"?t:e]:a.fixed==="right"&&(c=r.right[o==="rtl"?e:t]);var u=!1,p=!1,v=!1,h=!1,m=n[t+1],b=n[e-1],y=m&&!m.fixed||b&&!b.fixed||n.every(function(k){return k.fixed==="left"});if(o==="rtl"){if(s!==void 0){var w=b&&b.fixed==="left";h=!w&&y}else if(c!==void 0){var C=m&&m.fixed==="right";v=!C&&y}}else if(s!==void 0){var S=m&&m.fixed==="left";u=!S&&y}else if(c!==void 0){var E=b&&b.fixed==="right";p=!E&&y}return{fixLeft:s,fixRight:c,lastFixLeft:u,firstFixRight:p,lastFixRight:v,firstFixLeft:h,isSticky:r.isSticky}}var oN=d.createContext({});function TX(e){var t=e.className,n=e.index,r=e.children,o=e.colSpan,i=o===void 0?1:o,a=e.rowSpan,s=e.align,c=Lr(Yr,["prefixCls","direction"]),u=c.prefixCls,p=c.direction,v=d.useContext(oN),h=v.scrollColumnIndex,m=v.stickyOffsets,b=v.flattenColumns,y=n+i-1,w=y+1===h?i+1:i,C=Gw(n,n+w-1,b,m,p);return d.createElement(mc,$e({className:t,index:n,component:"td",prefixCls:u,record:null,dataIndex:null,align:s,colSpan:w,rowSpan:a,render:function(){return r}},C))}var PX=["children"];function MX(e){var t=e.children,n=Mt(e,PX);return d.createElement("tr",n,t)}function ah(e){var t=e.children;return t}ah.Row=MX;ah.Cell=TX;function NX(e){var t=e.children,n=e.stickyOffsets,r=e.flattenColumns,o=Lr(Yr,"prefixCls"),i=r.length-1,a=r[i],s=d.useMemo(function(){return{stickyOffsets:n,flattenColumns:r,scrollColumnIndex:a!=null&&a.scrollbar?i:null}},[a,r,i,n]);return d.createElement(oN.Provider,{value:s},d.createElement("tfoot",{className:"".concat(o,"-summary")},t))}const tp=gc(NX);var iN=ah;function RX(e){return null}function DX(e){return 
null}function aN(e,t,n,r,o,i,a){e.push({record:t,indent:n,index:a});var s=i(t),c=o==null?void 0:o.has(s);if(t&&Array.isArray(t[r])&&c)for(var u=0;u1?A-1:0),z=1;z=1?O:""),style:Z(Z({},n),C==null?void 0:C.style)}),b.map(function(P,R){var A=P.render,V=P.dataIndex,z=P.className,B=uN(h,P,R,c,o),_=B.key,H=B.fixedInfo,j=B.appendCellNode,L=B.additionalCellProps;return d.createElement(mc,$e({className:z,ellipsis:P.ellipsis,align:P.align,scope:P.rowScope,component:P.rowScope?v:p,prefixCls:m,key:_,record:r,index:o,renderIndex:i,dataIndex:V,render:A,shouldCellUpdate:P.shouldCellUpdate},H,{appendNode:j,additionalProps:L}))})),T;if(E&&(k.current||S)){var M=w(r,o,c+1,S);T=d.createElement(cN,{expanded:S,className:ie("".concat(m,"-expanded-row"),"".concat(m,"-expanded-row-level-").concat(c+1),O),prefixCls:m,component:u,cellComponent:p,colSpan:b.length,isEmpty:!1},M)}return d.createElement(d.Fragment,null,$,T)}const LX=gc(jX);function BX(e){var t=e.columnKey,n=e.onColumnResize,r=d.useRef();return d.useEffect(function(){r.current&&n(t,r.current.offsetWidth)},[]),d.createElement(qo,{data:t},d.createElement("td",{ref:r,style:{padding:0,border:0,height:0}},d.createElement("div",{style:{height:0,overflow:"hidden"}}," ")))}function AX(e){var t=e.prefixCls,n=e.columnsKey,r=e.onColumnResize;return d.createElement("tr",{"aria-hidden":"true",className:"".concat(t,"-measure-row"),style:{height:0,fontSize:0}},d.createElement(qo.Collection,{onBatchResize:function(i){i.forEach(function(a){var s=a.data,c=a.size;r(s,c.offsetWidth)})}},n.map(function(o){return d.createElement(BX,{key:o,columnKey:o,onColumnResize:r})})))}function zX(e){var 
t=e.data,n=e.measureColumnWidth,r=Lr(Yr,["prefixCls","getComponent","onColumnResize","flattenColumns","getRowKey","expandedKeys","childrenColumnName","emptyNode"]),o=r.prefixCls,i=r.getComponent,a=r.onColumnResize,s=r.flattenColumns,c=r.getRowKey,u=r.expandedKeys,p=r.childrenColumnName,v=r.emptyNode,h=sN(t,p,u,c),m=d.useRef({renderWithProps:!1}),b=i(["body","wrapper"],"tbody"),y=i(["body","row"],"tr"),w=i(["body","cell"],"td"),C=i(["body","cell"],"th"),S;t.length?S=h.map(function(k,O){var $=k.record,T=k.indent,M=k.index,P=c($,O);return d.createElement(LX,{key:P,rowKey:P,record:$,index:O,renderIndex:M,rowComponent:y,cellComponent:w,scopeCellComponent:C,getRowKey:c,indent:T})}):S=d.createElement(cN,{expanded:!0,className:"".concat(o,"-placeholder"),prefixCls:o,component:y,cellComponent:w,colSpan:s.length,isEmpty:!0},v);var E=ih(s);return d.createElement(rN.Provider,{value:m.current},d.createElement(b,{className:"".concat(o,"-tbody")},n&&d.createElement(AX,{prefixCls:o,columnsKey:E,onColumnResize:a}),S))}const HX=gc(zX);var FX=["expandable"],$u="RC_TABLE_INTERNAL_COL_DEFINE";function _X(e){var t=e.expandable,n=Mt(e,FX),r;return"expandable"in e?r=Z(Z({},n),t):r=n,r.showExpandColumn===!1&&(r.expandIconColumnIndex=-1),r}var VX=["columnType"];function dN(e){for(var t=e.colWidths,n=e.columns,r=e.columCount,o=Lr(Yr,["tableLayout"]),i=o.tableLayout,a=[],s=r||n.length,c=!1,u=s-1;u>=0;u-=1){var p=t[u],v=n&&n[u],h=void 0,m=void 0;if(v&&(h=v[$u],i==="auto"&&(m=v.minWidth)),p||m||h||c){var b=h||{};b.columnType;var y=Mt(b,VX);a.unshift(d.createElement("col",$e({key:u,style:{width:p,minWidth:m}},y))),c=!0}}return d.createElement("colgroup",null,a)}var WX=["className","noData","columns","flattenColumns","colWidths","columCount","stickyOffsets","direction","fixHeader","stickyTopOffset","stickyBottomOffset","stickyClassName","onScroll","maxContentScroll","children"];function UX(e,t){return d.useMemo(function(){for(var 
n=[],r=0;r1?"colgroup":"col":null,ellipsis:w.ellipsis,align:w.align,component:a,prefixCls:p,key:m[y]},C,{additionalProps:S,rowType:"header"}))}))};function XX(e){var t=[];function n(a,s){var c=arguments.length>2&&arguments[2]!==void 0?arguments[2]:0;t[c]=t[c]||[];var u=s,p=a.filter(Boolean).map(function(v){var h={key:v.key,className:v.className||"",children:v.title,column:v,colStart:u},m=1,b=v.children;return b&&b.length>0&&(m=n(b,u,c+1).reduce(function(y,w){return y+w},0),h.hasSubColumns=!0),"colSpan"in v&&(m=v.colSpan),"rowSpan"in v&&(h.rowSpan=v.rowSpan),h.colSpan=m,h.colEnd=h.colStart+m-1,t[c].push(h),u+=m,m});return p}n(e,0);for(var r=t.length,o=function(s){t[s].forEach(function(c){!("rowSpan"in c)&&!c.hasSubColumns&&(c.rowSpan=r-s)})},i=0;i1&&arguments[1]!==void 0?arguments[1]:"";return typeof t=="number"?t:t.endsWith("%")?e*parseFloat(t)/100:null}function YX(e,t,n){return d.useMemo(function(){if(t&&t>0){var r=0,o=0;e.forEach(function(h){var m=Qk(t,h.width);m?r+=m:o+=1});var i=Math.max(t,n),a=Math.max(i-r,o),s=o,c=a/o,u=0,p=e.map(function(h){var m=Z({},h),b=Qk(t,m.width);if(b)m.width=b;else{var y=Math.floor(c);m.width=s===1?a:y,a-=y,s-=1}return u+=m.width,m});if(u0?Z(Z({},t),{},{children:fN(n)}):t})}function Bb(e){var t=arguments.length>1&&arguments[1]!==void 0?arguments[1]:"key";return e.filter(function(n){return n&&st(n)==="object"}).reduce(function(n,r,o){var i=r.fixed,a=i===!0?"left":i,s="".concat(t,"-").concat(o),c=r.children;return c&&c.length>0?[].concat(Se(n),Se(Bb(c,s).map(function(u){return Z({fixed:a},u)}))):[].concat(Se(n),[Z(Z({key:s},r),{},{fixed:a})])},[])}function JX(e){return e.map(function(t){var n=t.fixed,r=Mt(t,ZX),o=n;return n==="left"?o="right":n==="right"&&(o="left"),Z({fixed:o},r)})}function eG(e,t){var 
n=e.prefixCls,r=e.columns,o=e.children,i=e.expandable,a=e.expandedKeys,s=e.columnTitle,c=e.getRowKey,u=e.onTriggerExpand,p=e.expandIcon,v=e.rowExpandable,h=e.expandIconColumnIndex,m=e.direction,b=e.expandRowByClick,y=e.columnWidth,w=e.fixed,C=e.scrollWidth,S=e.clientWidth,E=d.useMemo(function(){var V=r||Yw(o)||[];return fN(V.slice())},[r,o]),k=d.useMemo(function(){if(i){var V=E.slice();if(!V.includes(ja)){var z=h||0;z>=0&&V.splice(z,0,ja)}var B=V.indexOf(ja);V=V.filter(function(L,F){return L!==ja||F===B});var _=E[B],H;(w==="left"||w)&&!h?H="left":(w==="right"||w)&&h===E.length?H="right":H=_?_.fixed:null;var j=K(K(K(K(K(K({},$u,{className:"".concat(n,"-expand-icon-col"),columnType:"EXPAND_COLUMN"}),"title",s),"fixed",H),"className","".concat(n,"-row-expand-icon-cell")),"width",y),"render",function(F,U,D){var W=c(U,D),G=a.has(W),q=v?v(U):!0,J=p({prefixCls:n,expanded:G,expandable:q,record:U,onExpand:u});return b?d.createElement("span",{onClick:function(Q){return Q.stopPropagation()}},J):J});return V.map(function(L){return L===ja?j:L})}return E.filter(function(L){return L!==ja})},[i,E,c,a,p,m]),O=d.useMemo(function(){var V=k;return t&&(V=t(V)),V.length||(V=[{render:function(){return null}}]),V},[t,k,m]),$=d.useMemo(function(){return m==="rtl"?JX(Bb(O)):Bb(O)},[O,m,C]),T=d.useMemo(function(){for(var V=-1,z=$.length-1;z>=0;z-=1){var B=$[z].fixed;if(B==="left"||B===!0){V=z;break}}if(V>=0)for(var _=0;_<=V;_+=1){var H=$[_].fixed;if(H!=="left"&&H!==!0)return!0}var j=$.findIndex(function(U){var D=U.fixed;return D==="right"});if(j>=0)for(var L=j;L<$.length;L+=1){var F=$[L].fixed;if(F!=="right")return!0}return!1},[$]),M=YX($,C,S),P=ve(M,2),R=P[0],A=P[1];return[O,R,A,T]}function tG(e){var t=e.prefixCls,n=e.record,r=e.onExpand,o=e.expanded,i=e.expandable,a="".concat(t,"-row-expand-icon");if(!i)return d.createElement("span",{className:ie(a,"".concat(t,"-row-spaced"))});var s=function(u){r(n,u),u.stopPropagation()};return 
d.createElement("span",{className:ie(a,K(K({},"".concat(t,"-row-expanded"),o),"".concat(t,"-row-collapsed"),!o)),onClick:s})}function nG(e,t,n){var r=[];function o(i){(i||[]).forEach(function(a,s){r.push(t(a,s)),o(a[n])})}return o(e),r}function rG(e,t,n){var r=_X(e),o=r.expandIcon,i=r.expandedRowKeys,a=r.defaultExpandedRowKeys,s=r.defaultExpandAllRows,c=r.expandedRowRender,u=r.onExpand,p=r.onExpandedRowsChange,v=r.childrenColumnName,h=o||tG,m=v||"children",b=d.useMemo(function(){return c?"row":e.expandable&&e.internalHooks===Nd&&e.expandable.__PARENT_RENDER_ICON__||t.some(function(O){return O&&st(O)==="object"&&O[m]})?"nest":!1},[!!c,t]),y=d.useState(function(){return a||(s?nG(t,n,m):[])}),w=ve(y,2),C=w[0],S=w[1],E=d.useMemo(function(){return new Set(i||C||[])},[i,C]),k=d.useCallback(function(O){var $=n(O,t.indexOf(O)),T,M=E.has($);M?(E.delete($),T=Se(E)):T=[].concat(Se(E),[$]),S(T),u&&u(!M,O),p&&p(T)},[n,E,t,u,p]);return[r,b,E,h,m,k]}function oG(e,t,n){var r=e.map(function(o,i){return Gw(i,i,e,t,n)});return Ls(function(){return r},[r],function(o,i){return!zi(o,i)})}function pN(e){var t=d.useRef(e),n=d.useState({}),r=ve(n,2),o=r[1],i=d.useRef(null),a=d.useRef([]);function s(c){a.current.push(c);var u=Promise.resolve();i.current=u,u.then(function(){if(i.current===u){var p=a.current,v=t.current;a.current=[],p.forEach(function(h){t.current=h(t.current)}),i.current=null,v!==t.current&&o({})}})}return d.useEffect(function(){return function(){i.current=null}},[]),[t.current,s]}function iG(e){var t=d.useRef(null),n=d.useRef();function r(){window.clearTimeout(n.current)}function o(a){t.current=a,r(),n.current=window.setTimeout(function(){t.current=null,n.current=void 0},100)}function i(){return t.current}return d.useEffect(function(){return r},[]),[o,i]}function aG(){var e=d.useState(-1),t=ve(e,2),n=t[0],r=t[1],o=d.useState(-1),i=ve(o,2),a=i[0],s=i[1],c=d.useCallback(function(u,p){r(u),s(p)},[]);return[n,a,c]}var Zk=$r()?window:null;function sG(e,t){var 
n=st(e)==="object"?e:{},r=n.offsetHeader,o=r===void 0?0:r,i=n.offsetSummary,a=i===void 0?0:i,s=n.offsetScroll,c=s===void 0?0:s,u=n.getContainer,p=u===void 0?function(){return Zk}:u,v=p()||Zk,h=!!e;return d.useMemo(function(){return{isSticky:h,stickyClassName:h?"".concat(t,"-sticky-holder"):"",offsetHeader:o,offsetSummary:a,offsetScroll:c,container:v}},[h,c,o,a,t,v])}function lG(e,t,n){var r=d.useMemo(function(){var o=t.length,i=function(u,p,v){for(var h=[],m=0,b=u;b!==p;b+=v)h.push(m),t[b].fixed&&(m+=e[b]||0);return h},a=i(0,o,1),s=i(o-1,-1,-1).reverse();return n==="rtl"?{left:s,right:a}:{left:a,right:s}},[e,t,n]);return r}function Jk(e){var t=e.className,n=e.children;return d.createElement("div",{className:t},n)}var cG=function(t,n){var r,o,i=t.scrollBodyRef,a=t.onScroll,s=t.offsetScroll,c=t.container,u=Lr(Yr,"prefixCls"),p=((r=i.current)===null||r===void 0?void 0:r.scrollWidth)||0,v=((o=i.current)===null||o===void 0?void 0:o.clientWidth)||0,h=p&&v*(v/p),m=d.useRef(),b=pN({scrollLeft:0,isHiddenScrollBar:!0}),y=ve(b,2),w=y[0],C=y[1],S=d.useRef({delta:0,x:0}),E=d.useState(!1),k=ve(E,2),O=k[0],$=k[1],T=d.useRef(null);d.useEffect(function(){return function(){bn.cancel(T.current)}},[]);var M=function(){$(!1)},P=function(B){B.persist(),S.current.delta=B.pageX-w.scrollLeft,S.current.x=0,$(!0),B.preventDefault()},R=function(B){var _,H=B||((_=window)===null||_===void 0?void 0:_.event),j=H.buttons;if(!O||j===0){O&&$(!1);return}var L=S.current.x+B.pageX-S.current.x-S.current.delta;L<=0&&(L=0),L+h>=v&&(L=v-h),a({scrollLeft:L/v*(p+2)}),S.current.x=B.pageX},A=function(){T.current=bn(function(){if(i.current){var B=Lk(i.current).top,_=B+i.current.offsetHeight,H=c===window?document.documentElement.scrollTop+window.innerHeight:Lk(c).top+c.clientHeight;_-mE()<=H||B>=H-s?C(function(j){return Z(Z({},j),{},{isHiddenScrollBar:!0})}):C(function(j){return Z(Z({},j),{},{isHiddenScrollBar:!1})})}})},V=function(B){C(function(_){return Z(Z({},_),{},{scrollLeft:B/p*v||0})})};return 
d.useImperativeHandle(n,function(){return{setScrollLeft:V,checkScrollBarVisible:A}}),d.useEffect(function(){var z=ep(document.body,"mouseup",M,!1),B=ep(document.body,"mousemove",R,!1);return A(),function(){z.remove(),B.remove()}},[h,O]),d.useEffect(function(){var z=ep(c,"scroll",A,!1),B=ep(window,"resize",A,!1);return function(){z.remove(),B.remove()}},[c]),d.useEffect(function(){w.isHiddenScrollBar||C(function(z){var B=i.current;return B?Z(Z({},z),{},{scrollLeft:B.scrollLeft/B.scrollWidth*B.clientWidth}):z})},[w.isHiddenScrollBar]),p<=v||!h||w.isHiddenScrollBar?null:d.createElement("div",{style:{height:mE(),width:v,bottom:s},className:"".concat(u,"-sticky-scroll")},d.createElement("div",{onMouseDown:P,ref:m,className:ie("".concat(u,"-sticky-scroll-bar"),K({},"".concat(u,"-sticky-scroll-bar-active"),O)),style:{width:"".concat(h,"px"),transform:"translate3d(".concat(w.scrollLeft,"px, 0, 0)")}}))};const uG=d.forwardRef(cG);var vN="rc-table",dG=[],fG={};function pG(){return"No Data"}function vG(e,t){var n=Z({rowKey:"key",prefixCls:vN,emptyText:pG},e),r=n.prefixCls,o=n.className,i=n.rowClassName,a=n.style,s=n.data,c=n.rowKey,u=n.scroll,p=n.tableLayout,v=n.direction,h=n.title,m=n.footer,b=n.summary,y=n.caption,w=n.id,C=n.showHeader,S=n.components,E=n.emptyText,k=n.onRow,O=n.onHeaderRow,$=n.onScroll,T=n.internalHooks,M=n.transformColumns,P=n.internalRefs,R=n.tailor,A=n.getContainerWidth,V=n.sticky,z=n.rowHoverable,B=z===void 0?!0:z,_=s||dG,H=!!_.length,j=T===Nd,L=d.useCallback(function(At,zt){return bo(S,At)||zt},[S]),F=d.useMemo(function(){return typeof c=="function"?c:function(At){var zt=At&&At[c];return zt}},[c]),U=L(["body"]),D=aG(),W=ve(D,3),G=W[0],q=W[1],J=W[2],Y=rG(n,_,F),Q=ve(Y,6),te=Q[0],ce=Q[1],se=Q[2],ne=Q[3],ae=Q[4],ee=Q[5],re=u==null?void 
0:u.x,le=d.useState(0),pe=ve(le,2),Oe=pe[0],ge=pe[1],Re=eG(Z(Z(Z({},n),te),{},{expandable:!!te.expandedRowRender,columnTitle:te.columnTitle,expandedKeys:se,getRowKey:F,onTriggerExpand:ee,expandIcon:ne,expandIconColumnIndex:te.expandIconColumnIndex,direction:v,scrollWidth:j&&R&&typeof re=="number"?re:null,clientWidth:Oe}),j?M:null),ye=ve(Re,4),Te=ye[0],Ae=ye[1],me=ye[2],Ie=ye[3],Le=me??re,Be=d.useMemo(function(){return{columns:Te,flattenColumns:Ae}},[Te,Ae]),et=d.useRef(),rt=d.useRef(),Ze=d.useRef(),Ve=d.useRef();d.useImperativeHandle(t,function(){return{nativeElement:et.current,scrollTo:function(zt){var Vn;if(Ze.current instanceof HTMLElement){var En=zt.index,Pn=zt.top,xr=zt.key;if(SX(Pn)){var Bn;(Bn=Ze.current)===null||Bn===void 0||Bn.scrollTo({top:Pn})}else{var An,sr=xr??F(_[En]);(An=Ze.current.querySelector('[data-row-key="'.concat(sr,'"]')))===null||An===void 0||An.scrollIntoView()}}else(Vn=Ze.current)!==null&&Vn!==void 0&&Vn.scrollTo&&Ze.current.scrollTo(zt)}}});var Ye=d.useRef(),Ge=d.useState(!1),Fe=ve(Ge,2),we=Fe[0],ze=Fe[1],Me=d.useState(!1),Pe=ve(Me,2),Ke=Pe[0],St=Pe[1],Ft=pN(new Map),Lt=ve(Ft,2),Ct=Lt[0],Xt=Lt[1],Pt=ih(Ae),Gt=Pt.map(function(At){return Ct.get(At)}),ft=d.useMemo(function(){return Gt},[Gt.join("_")]),Je=lG(ft,Ae,v),He=u&&Lb(u.y),We=u&&Lb(Le)||!!te.fixed,Et=We&&Ae.some(function(At){var zt=At.fixed;return zt}),wt=d.useRef(),_e=sG(V,r),qe=_e.isSticky,ot=_e.offsetHeader,at=_e.offsetSummary,xt=_e.offsetScroll,_t=_e.stickyClassName,pt=_e.container,dt=d.useMemo(function(){return b==null?void 0:b(_)},[b,_]),$t=(He||qe)&&d.isValidElement(dt)&&dt.type===ah&&dt.props.fixed,kt,Kt,ln;He&&(Kt={overflowY:H?"scroll":"auto",maxHeight:u.y}),We&&(kt={overflowX:"auto"},He||(Kt={overflowY:"hidden"}),ln={width:Le===!0?"auto":Le,minWidth:"100%"});var Yt=d.useCallback(function(At,zt){xd(et.current)&&Xt(function(Vn){if(Vn.get(At)!==zt){var En=new Map(Vn);return En.set(At,zt),En}return Vn})},[]),un=iG(),ut=ve(un,2),lt=ut[0],gt=ut[1];function Qt(At,zt){zt&&(typeof 
zt=="function"?zt(At):zt.scrollLeft!==At&&(zt.scrollLeft=At,zt.scrollLeft!==At&&setTimeout(function(){zt.scrollLeft=At},0)))}var dn=gn(function(At){var zt=At.currentTarget,Vn=At.scrollLeft,En=v==="rtl",Pn=typeof Vn=="number"?Vn:zt.scrollLeft,xr=zt||fG;if(!gt()||gt()===xr){var Bn;lt(xr),Qt(Pn,rt.current),Qt(Pn,Ze.current),Qt(Pn,Ye.current),Qt(Pn,(Bn=wt.current)===null||Bn===void 0?void 0:Bn.setScrollLeft)}var An=zt||rt.current;if(An){var sr=typeof Le=="number"?Le:An.scrollWidth,Zr=An.clientWidth;if(sr===Zr){ze(!1),St(!1);return}En?(ze(-Pn0)):(ze(Pn>0),St(Pn1?w-B:0,H=Z(Z(Z({},M),u),{},{flex:"0 0 ".concat(B,"px"),width:"".concat(B,"px"),marginRight:_,pointerEvents:"auto"}),j=d.useMemo(function(){return v?V<=1:R===0||V===0||V>1},[V,R,v]);j?H.visibility="hidden":v&&(H.height=h==null?void 0:h(V));var L=j?function(){return null}:m,F={};return(V===0||R===0)&&(F.rowSpan=1,F.colSpan=1),d.createElement(mc,$e({className:ie(y,p),ellipsis:n.ellipsis,align:n.align,scope:n.rowScope,component:a,prefixCls:t.prefixCls,key:k,record:c,index:i,renderIndex:s,dataIndex:b,render:L,shouldCellUpdate:n.shouldCellUpdate},O,{appendNode:$,additionalProps:Z(Z({},T),{},{style:H},F)}))}var bG=["data","index","className","rowKey","style","extra","getHeight"],yG=d.forwardRef(function(e,t){var n=e.data,r=e.index,o=e.className,i=e.rowKey,a=e.style,s=e.extra,c=e.getHeight,u=Mt(e,bG),p=n.record,v=n.indent,h=n.index,m=Lr(Yr,["prefixCls","flattenColumns","fixColumn","componentWidth","scrollX"]),b=m.scrollX,y=m.flattenColumns,w=m.prefixCls,C=m.fixColumn,S=m.componentWidth,E=Lr(Qw,["getComponent"]),k=E.getComponent,O=lN(p,i,r,v),$=k(["body","row"],"div"),T=k(["body","cell"],"div"),M=O.rowSupportExpand,P=O.expanded,R=O.rowProps,A=O.expandedRowRender,V=O.expandedRowClassName,z;if(M&&P){var B=A(p,r,v+1,P),_=V==null?void 0:V(p,r,v),H={};C&&(H={style:K({},"--virtual-width","".concat(S,"px"))});var 
j="".concat(w,"-expanded-row-cell");z=d.createElement($,{className:ie("".concat(w,"-expanded-row"),"".concat(w,"-expanded-row-level-").concat(v+1),_)},d.createElement(mc,{component:T,prefixCls:w,className:ie(j,K({},"".concat(j,"-fixed"),C)),additionalProps:H},B))}var L=Z(Z({},a),{},{width:b});s&&(L.position="absolute",L.pointerEvents="none");var F=d.createElement($,$e({},R,u,{"data-row-key":i,ref:M?null:t,className:ie(o,"".concat(w,"-row"),R==null?void 0:R.className,K({},"".concat(w,"-row-extra"),s)),style:Z(Z({},L),R==null?void 0:R.style)}),y.map(function(U,D){return d.createElement(mG,{key:D,component:T,rowInfo:O,column:U,colIndex:D,indent:v,index:r,renderIndex:h,record:p,inverse:s,getHeight:c})}));return M?d.createElement("div",{ref:t},F,z):F}),eO=gc(yG),wG=d.forwardRef(function(e,t){var n=e.data,r=e.onScroll,o=Lr(Yr,["flattenColumns","onColumnResize","getRowKey","prefixCls","expandedKeys","childrenColumnName","scrollX"]),i=o.flattenColumns,a=o.onColumnResize,s=o.getRowKey,c=o.expandedKeys,u=o.prefixCls,p=o.childrenColumnName,v=o.scrollX,h=Lr(Qw),m=h.sticky,b=h.scrollY,y=h.listItemHeight,w=h.getComponent,C=h.onScroll,S=d.useRef(),E=sN(n,p,c,s),k=d.useMemo(function(){var V=0;return i.map(function(z){var B=z.width,_=z.key;return V+=B,[_,B,V]})},[i]),O=d.useMemo(function(){return k.map(function(V){return V[2]})},[k]);d.useEffect(function(){k.forEach(function(V){var z=ve(V,2),B=z[0],_=z[1];a(B,_)})},[k]),d.useImperativeHandle(t,function(){var V,z={scrollTo:function(_){var H;(H=S.current)===null||H===void 0||H.scrollTo(_)},nativeElement:(V=S.current)===null||V===void 0?void 0:V.nativeElement};return Object.defineProperty(z,"scrollLeft",{get:function(){var _;return((_=S.current)===null||_===void 0?void 0:_.getScrollInfo().x)||0},set:function(_){var H;(H=S.current)===null||H===void 0||H.scrollTo({left:_})}}),z});var $=function(z,B){var _,H=(_=E[B])===null||_===void 0?void 0:_.record,j=z.onCell;if(j){var L,F=j(H,B);return(L=F==null?void 0:F.rowSpan)!==null&&L!==void 
0?L:1}return 1},T=function(z){var B=z.start,_=z.end,H=z.getSize,j=z.offsetY;if(_<0)return null;for(var L=i.filter(function(se){return $(se,B)===0}),F=B,U=function(ne){if(L=L.filter(function(ae){return $(ae,ne)===0}),!L.length)return F=ne,1},D=B;D>=0&&!U(D);D-=1);for(var W=i.filter(function(se){return $(se,_)!==1}),G=_,q=function(ne){if(W=W.filter(function(ae){return $(ae,ne)!==1}),!W.length)return G=Math.max(ne-1,_),1},J=_;J1})&&Y.push(ne)},te=F;te<=G;te+=1)Q(te);var ce=Y.map(function(se){var ne=E[se],ae=s(ne.record,se),ee=function(pe){var Oe=se+pe-1,ge=s(E[Oe].record,Oe),Re=H(ae,ge);return Re.bottom-Re.top},re=H(ae);return d.createElement(eO,{key:se,data:ne,rowKey:ae,index:se,style:{top:-j+re.top},extra:!0,getHeight:ee})});return ce},M=d.useMemo(function(){return{columnsOffset:O}},[O]),P="".concat(u,"-tbody"),R=w(["body","wrapper"]),A={};return m&&(A.position="sticky",A.bottom=0,st(m)==="object"&&m.offsetScroll&&(A.bottom=m.offsetScroll)),d.createElement(gN.Provider,{value:M},d.createElement(qv,{fullHeight:!1,ref:S,prefixCls:"".concat(P,"-virtual"),styles:{horizontalScrollBar:A},className:P,height:b,itemHeight:y||24,data:E,itemKey:function(z){return s(z.record)},component:R,scrollWidth:v,onVirtualScroll:function(z){var B,_=z.x;r({currentTarget:(B=S.current)===null||B===void 0?void 0:B.nativeElement,scrollLeft:_})},onScroll:C,extraRender:T},function(V,z,B){var _=s(V.record,z);return d.createElement(eO,{data:V,rowKey:_,index:z,style:B.style})}))}),xG=gc(wG),SG=function(t,n){var r=n.ref,o=n.onScroll;return d.createElement(xG,{ref:r,data:t,onScroll:o})};function CG(e,t){var n=e.data,r=e.columns,o=e.scroll,i=e.sticky,a=e.prefixCls,s=a===void 0?vN:a,c=e.className,u=e.listItemHeight,p=e.components,v=e.onScroll,h=o||{},m=h.x,b=h.y;typeof m!="number"&&(m=1),typeof b!="number"&&(b=500);var y=gn(function(S,E){return bo(p,S)||E}),w=gn(v),C=d.useMemo(function(){return{sticky:i,scrollY:b,listItemHeight:u,getComponent:y,onScroll:w}},[i,b,u,y,w]);return 
d.createElement(Qw.Provider,{value:C},d.createElement(bc,$e({},e,{className:ie(c,"".concat(s,"-virtual")),scroll:Z(Z({},o),{},{x:m}),components:Z(Z({},p),{},{body:n!=null&&n.length?SG:void 0}),columns:r,internalHooks:Nd,tailor:!0,ref:t})))}var EG=d.forwardRef(CG);function mN(e){return nN(EG,e)}mN();const kG=e=>null,OG=e=>null;var Zw=d.createContext(null),$G=function(t){for(var n=t.prefixCls,r=t.level,o=t.isStart,i=t.isEnd,a="".concat(n,"-indent-unit"),s=[],c=0;c=0&&n.splice(r,1),n}function Zi(e,t){var n=(e||[]).slice();return n.indexOf(t)===-1&&n.push(t),n}function Jw(e){return e.split("-")}function NG(e,t){var n=[],r=io(t,e);function o(){var i=arguments.length>0&&arguments[0]!==void 0?arguments[0]:[];i.forEach(function(a){var s=a.key,c=a.children;n.push(s),o(c)})}return o(r.children),n}function RG(e){if(e.parent){var t=Jw(e.pos);return Number(t[t.length-1])===e.parent.children.length-1}return!1}function DG(e){var t=Jw(e.pos);return Number(t[t.length-1])===0}function rO(e,t,n,r,o,i,a,s,c,u){var p,v=e.clientX,h=e.clientY,m=e.target.getBoundingClientRect(),b=m.top,y=m.height,w=(u==="rtl"?-1:1)*(((o==null?void 0:o.x)||0)-v),C=(w-12)/r,S=c.filter(function(H){var j;return(j=s[H])===null||j===void 0||(j=j.children)===null||j===void 0?void 0:j.length}),E=io(s,n.props.eventKey);if(h-1.5?i({dragNode:z,dropNode:B,dropPosition:1})?R=1:_=!1:i({dragNode:z,dropNode:B,dropPosition:0})?R=0:i({dragNode:z,dropNode:B,dropPosition:1})?R=1:_=!1:i({dragNode:z,dropNode:B,dropPosition:1})?R=1:_=!1,{dropPosition:R,dropLevelOffset:A,dropTargetKey:E.key,dropTargetPos:E.pos,dragOverNodeKey:P,dropContainerKey:R===0?null:((p=E.parent)===null||p===void 0?void 0:p.key)||null,dropAllowed:_}}function oO(e,t){if(e){var n=t.multiple;return n?e.slice():e.length?[e[0]]:e}}function _m(e){if(!e)return null;var t;if(Array.isArray(e))t={checkedKeys:e,halfCheckedKeys:void 0};else if(st(e)==="object")t={checkedKeys:e.checked||void 0,halfCheckedKeys:e.halfChecked||void 0};else return Fn(!1,"`checkedKeys` is 
not an array or an object"),null;return t}function Ab(e,t){var n=new Set;function r(o){if(!n.has(o)){var i=io(t,o);if(i){n.add(o);var a=i.parent,s=i.node;s.disabled||a&&r(a.key)}}}return(e||[]).forEach(function(o){r(o)}),Se(n)}function jG(e){const[t,n]=d.useState(null);return[d.useCallback((i,a,s)=>{const c=t??i,u=Math.min(c||0,i),p=Math.max(c||0,i),v=a.slice(u,p+1).map(b=>e(b)),h=v.some(b=>!s.has(b)),m=[];return v.forEach(b=>{h?(s.has(b)||m.push(b),s.add(b)):(s.delete(b),m.push(b))}),n(h?p:null),m},[t]),i=>{n(i)}]}const Na={},zb="SELECT_ALL",Hb="SELECT_INVERT",Fb="SELECT_NONE",iO=[],bN=(e,t)=>{let n=[];return(t||[]).forEach(r=>{n.push(r),r&&typeof r=="object"&&e in r&&(n=[].concat(Se(n),Se(bN(e,r[e]))))}),n},LG=(e,t)=>{const{preserveSelectedRowKeys:n,selectedRowKeys:r,defaultSelectedRowKeys:o,getCheckboxProps:i,onChange:a,onSelect:s,onSelectAll:c,onSelectInvert:u,onSelectNone:p,onSelectMultiple:v,columnWidth:h,type:m,selections:b,fixed:y,renderCell:w,hideSelectAll:C,checkStrictly:S=!0}=t||{},{prefixCls:E,data:k,pageData:O,getRecordByKey:$,getRowKey:T,expandType:M,childrenColumnName:P,locale:R,getPopupContainer:A}=e,V=As(),[z,B]=jG(ne=>ne),[_,H]=Dn(r||o||iO,{value:r}),j=d.useRef(new Map),L=d.useCallback(ne=>{if(n){const ae=new Map;ne.forEach(ee=>{let re=$(ee);!re&&j.current.has(ee)&&(re=j.current.get(ee)),ae.set(ee,re)}),j.current=ae}},[$,n]);d.useEffect(()=>{L(_)},[_]);const F=d.useMemo(()=>bN(P,O),[P,O]),{keyEntities:U}=d.useMemo(()=>{if(S)return{keyEntities:null};let ne=k;if(n){const ae=new Set(F.map((re,le)=>T(re,le))),ee=Array.from(j.current).reduce((re,le)=>{let[pe,Oe]=le;return ae.has(pe)?re:re.concat(Oe)},[]);ne=[].concat(Se(ne),Se(ee))}return nh(ne,{externalGetKey:T,childrenPropName:P})},[k,T,S,P,n,F]),D=d.useMemo(()=>{const ne=new Map;return F.forEach((ae,ee)=>{const re=T(ae,ee),le=(i?i(ae):null)||{};ne.set(re,le)}),ne},[F,T,i]),W=d.useCallback(ne=>{var ae;return!!(!((ae=D.get(T(ne)))===null||ae===void 
0)&&ae.disabled)},[D,T]),[G,q]=d.useMemo(()=>{if(S)return[_||[],[]];const{checkedKeys:ne,halfCheckedKeys:ae}=ta(_,!0,U,W);return[ne||[],ae]},[_,S,U,W]),J=d.useMemo(()=>{const ne=m==="radio"?G.slice(0,1):G;return new Set(ne)},[G,m]),Y=d.useMemo(()=>m==="radio"?new Set:new Set(q),[q,m]);d.useEffect(()=>{t||H(iO)},[!!t]);const Q=d.useCallback((ne,ae)=>{let ee,re;L(ne),n?(ee=ne,re=ne.map(le=>j.current.get(le))):(ee=[],re=[],ne.forEach(le=>{const pe=$(le);pe!==void 0&&(ee.push(le),re.push(pe))})),H(ee),a==null||a(ee,re,{type:ae})},[H,$,a,n]),te=d.useCallback((ne,ae,ee,re)=>{if(s){const le=ee.map(pe=>$(pe));s($(ne),ae,le,re)}Q(ee,"single")},[s,$,Q]),ce=d.useMemo(()=>!b||C?null:(b===!0?[zb,Hb,Fb]:b).map(ae=>ae===zb?{key:"all",text:R.selectionAll,onSelect(){Q(k.map((ee,re)=>T(ee,re)).filter(ee=>{const re=D.get(ee);return!(re!=null&&re.disabled)||J.has(ee)}),"all")}}:ae===Hb?{key:"invert",text:R.selectInvert,onSelect(){const ee=new Set(J);O.forEach((le,pe)=>{const Oe=T(le,pe),ge=D.get(Oe);ge!=null&&ge.disabled||(ee.has(Oe)?ee.delete(Oe):ee.add(Oe))});const re=Array.from(ee);u&&(V.deprecated(!1,"onSelectInvert","onChange"),u(re)),Q(re,"invert")}}:ae===Fb?{key:"none",text:R.selectNone,onSelect(){p==null||p(),Q(Array.from(J).filter(ee=>{const re=D.get(ee);return re==null?void 0:re.disabled}),"none")}}:ae).map(ae=>Object.assign(Object.assign({},ae),{onSelect:function(){for(var ee,re,le=arguments.length,pe=new Array(le),Oe=0;Oe{var ae;if(!t)return ne.filter(Ve=>Ve!==Na);let ee=Se(ne);const re=new Set(J),le=F.map(T).filter(Ve=>!D.get(Ve).disabled),pe=le.every(Ve=>re.has(Ve)),Oe=le.some(Ve=>re.has(Ve)),ge=()=>{const Ve=[];pe?le.forEach(Ge=>{re.delete(Ge),Ve.push(Ge)}):le.forEach(Ge=>{re.has(Ge)||(re.add(Ge),Ve.push(Ge))});const Ye=Array.from(re);c==null||c(!pe,Ye.map(Ge=>$(Ge)),Ve.map(Ge=>$(Ge))),Q(Ye,"all"),B(null)};let Re,ye;if(m!=="radio"){let Ve;if(ce){const 
ze={getPopupContainer:A,items:ce.map((Me,Pe)=>{const{key:Ke,text:St,onSelect:Ft}=Me;return{key:Ke??Pe,onClick:()=>{Ft==null||Ft(le)},label:St}})};Ve=d.createElement("div",{className:`${E}-selection-extra`},d.createElement(Uw,{menu:ze,getPopupContainer:A},d.createElement("span",null,d.createElement(vP,null))))}const Ye=F.map((ze,Me)=>{const Pe=T(ze,Me),Ke=D.get(Pe)||{};return Object.assign({checked:re.has(Pe)},Ke)}).filter(ze=>{let{disabled:Me}=ze;return Me}),Ge=!!Ye.length&&Ye.length===F.length,Fe=Ge&&Ye.every(ze=>{let{checked:Me}=ze;return Me}),we=Ge&&Ye.some(ze=>{let{checked:Me}=ze;return Me});ye=d.createElement(Ns,{checked:Ge?Fe:!!F.length&&pe,indeterminate:Ge?!Fe&&we:!pe&&Oe,onChange:ge,disabled:F.length===0||Ge,"aria-label":Ve?"Custom selection":"Select all",skipGroup:!0}),Re=!C&&d.createElement("div",{className:`${E}-selection`},ye,Ve)}let Te;m==="radio"?Te=(Ve,Ye,Ge)=>{const Fe=T(Ye,Ge),we=re.has(Fe);return{node:d.createElement(Td,Object.assign({},D.get(Fe),{checked:we,onClick:ze=>ze.stopPropagation(),onChange:ze=>{re.has(Fe)||te(Fe,!0,[Fe],ze.nativeEvent)}})),checked:we}}:Te=(Ve,Ye,Ge)=>{var Fe;const we=T(Ye,Ge),ze=re.has(we),Me=Y.has(we),Pe=D.get(we);let Ke;return M==="nest"?Ke=Me:Ke=(Fe=Pe==null?void 0:Pe.indeterminate)!==null&&Fe!==void 0?Fe:Me,{node:d.createElement(Ns,Object.assign({},Pe,{indeterminate:Ke,checked:ze,skipGroup:!0,onClick:St=>St.stopPropagation(),onChange:St=>{let{nativeEvent:Ft}=St;const{shiftKey:Lt}=Ft,Ct=le.findIndex(Pt=>Pt===we),Xt=G.some(Pt=>le.includes(Pt));if(Lt&&S&&Xt){const Pt=z(Ct,le,re),Gt=Array.from(re);v==null||v(!ze,Gt.map(ft=>$(ft)),Pt.map(ft=>$(ft))),Q(Gt,"multiple")}else{const Pt=G;if(S){const Gt=ze?$i(Pt,we):Zi(Pt,we);te(we,!ze,Gt,Ft)}else{const Gt=ta([].concat(Se(Pt),[we]),!0,U,W),{checkedKeys:ft,halfCheckedKeys:Je}=Gt;let He=ft;if(ze){const We=new Set(ft);We.delete(we),He=ta(Array.from(We),{checked:!1,halfCheckedKeys:Je},U,W).checkedKeys}te(we,!ze,He,Ft)}}B(ze?null:Ct)}})),checked:ze}};const 
Ae=(Ve,Ye,Ge)=>{const{node:Fe,checked:we}=Te(Ve,Ye,Ge);return w?w(we,Ye,Ge,Fe):Fe};if(!ee.includes(Na))if(ee.findIndex(Ve=>{var Ye;return((Ye=Ve[$u])===null||Ye===void 0?void 0:Ye.columnType)==="EXPAND_COLUMN"})===0){const[Ve,...Ye]=ee;ee=[Ve,Na].concat(Se(Ye))}else ee=[Na].concat(Se(ee));const me=ee.indexOf(Na);ee=ee.filter((Ve,Ye)=>Ve!==Na||Ye===me);const Ie=ee[me-1],Le=ee[me+1];let Be=y;Be===void 0&&((Le==null?void 0:Le.fixed)!==void 0?Be=Le.fixed:(Ie==null?void 0:Ie.fixed)!==void 0&&(Be=Ie.fixed)),Be&&Ie&&((ae=Ie[$u])===null||ae===void 0?void 0:ae.columnType)==="EXPAND_COLUMN"&&Ie.fixed===void 0&&(Ie.fixed=Be);const et=ie(`${E}-selection-col`,{[`${E}-selection-col-with-dropdown`]:b&&m==="checkbox"}),rt=()=>t!=null&&t.columnTitle?typeof t.columnTitle=="function"?t.columnTitle(ye):t.columnTitle:Re,Ze={fixed:Be,width:h,className:`${E}-selection-column`,title:rt(),render:Ae,onCell:t.onCell,[$u]:{className:et}};return ee.map(Ve=>Ve===Na?Ze:Ve)},[T,F,t,G,J,Y,h,ce,M,D,v,te,W]),J]};function BG(e,t){return e._antProxy=e._antProxy||{},Object.keys(t).forEach(n=>{if(!(n in e._antProxy)){const r=e[n];e._antProxy[n]=r,e[n]=t[n]}}),e}function AG(e,t){return d.useImperativeHandle(e,()=>{const n=t(),{nativeElement:r}=n;return typeof Proxy<"u"?new Proxy(r,{get(o,i){return n[i]?n[i]:Reflect.get(o,i)}}):BG(r,n)})}function zG(e){return t=>{const{prefixCls:n,onExpand:r,record:o,expanded:i,expandable:a}=t,s=`${n}-row-expand-icon`;return d.createElement("button",{type:"button",onClick:c=>{r(o,c),c.stopPropagation()},className:ie(s,{[`${s}-spaced`]:!a,[`${s}-expanded`]:a&&i,[`${s}-collapsed`]:a&&!i}),"aria-label":i?e.collapse:e.expand,"aria-expanded":i})}}function HG(e){return(n,r)=>{const o=n.querySelector(`.${e}-container`);let i=r;if(o){const a=getComputedStyle(o),s=parseInt(a.borderLeftWidth,10),c=parseInt(a.borderRightWidth,10);i=r-s-c}return i}}const Ga=(e,t)=>"key"in e&&e.key!==void 
0&&e.key!==null?e.key:e.dataIndex?Array.isArray(e.dataIndex)?e.dataIndex.join("."):e.dataIndex:t;function yc(e,t){return t?`${t}-${e}`:`${e}`}const sh=(e,t)=>typeof e=="function"?e(t):e,FG=(e,t)=>{const n=sh(e,t);return Object.prototype.toString.call(n)==="[object Object]"?"":n};function _G(e){const t=d.useRef(e),n=kw();return[()=>t.current,r=>{t.current=r,n()}]}function VG(e){var t=e.dropPosition,n=e.dropLevelOffset,r=e.indent,o={pointerEvents:"none",position:"absolute",right:0,backgroundColor:"red",height:2};switch(t){case-1:o.top=0,o.left=-n*r;break;case 1:o.bottom=0,o.left=-n*r;break;case 0:o.bottom=0,o.left=r;break}return d.createElement("div",{style:o})}function yN(e){if(e==null)throw new TypeError("Cannot destructure "+e)}function WG(e,t){var n=d.useState(!1),r=ve(n,2),o=r[0],i=r[1];sn(function(){if(o)return e(),function(){t()}},[o]),sn(function(){return i(!0),function(){i(!1)}},[])}var UG=["className","style","motion","motionNodes","motionType","onMotionStart","onMotionEnd","active","treeNodeRequiredProps"],wN=function(t,n){var r=t.className,o=t.style,i=t.motion,a=t.motionNodes,s=t.motionType,c=t.onMotionStart,u=t.onMotionEnd,p=t.active,v=t.treeNodeRequiredProps,h=Mt(t,UG),m=d.useState(!0),b=ve(m,2),y=b[0],w=b[1],C=d.useContext(Zw),S=C.prefixCls,E=a&&s!=="hide";sn(function(){a&&E!==y&&w(E)},[a]);var k=function(){a&&c()},O=d.useRef(!1),$=function(){a&&!O.current&&(O.current=!0,u())};WG(k,$);var T=function(P){E===P&&$()};return a?d.createElement(Xo,$e({ref:n,visible:y},i,{motionAppear:s==="show",onVisibleChanged:T}),function(M,P){var R=M.className,A=M.style;return d.createElement("div",{ref:P,className:ie("".concat(S,"-treenode-motion"),R),style:A},a.map(function(V){var z=Object.assign({},(yN(V.data),V.data)),B=V.title,_=V.key,H=V.isStart,j=V.isEnd;delete z.children;var L=ku(_,v);return 
d.createElement(tc,$e({},z,L,{title:B,active:p,data:V.data,key:_,isStart:H,isEnd:j}))}))}):d.createElement(tc,$e({domRef:n,className:r,style:o},h,{active:p}))};wN.displayName="MotionTreeNode";var KG=d.forwardRef(wN);function qG(){var e=arguments.length>0&&arguments[0]!==void 0?arguments[0]:[],t=arguments.length>1&&arguments[1]!==void 0?arguments[1]:[],n=e.length,r=t.length;if(Math.abs(n-r)!==1)return{add:!1,key:null};function o(i,a){var s=new Map;i.forEach(function(u){s.set(u,!0)});var c=a.filter(function(u){return!s.has(u)});return c.length===1?c[0]:null}return n ").concat(t);return t}var SN=d.forwardRef(function(e,t){var n=e.prefixCls,r=e.data;e.selectable,e.checkable;var o=e.expandedKeys,i=e.selectedKeys,a=e.checkedKeys,s=e.loadedKeys,c=e.loadingKeys,u=e.halfCheckedKeys,p=e.keyEntities,v=e.disabled,h=e.dragging,m=e.dragOverNodeKey,b=e.dropPosition,y=e.motion,w=e.height,C=e.itemHeight,S=e.virtual,E=e.focusable,k=e.activeItem,O=e.focused,$=e.tabIndex,T=e.onKeyDown,M=e.onFocus,P=e.onBlur,R=e.onActiveChange,A=e.onListChangeStart,V=e.onListChangeEnd,z=Mt(e,XG),B=d.useRef(null),_=d.useRef(null);d.useImperativeHandle(t,function(){return{scrollTo:function(Te){B.current.scrollTo(Te)},getIndentWidth:function(){return _.current.offsetWidth}}});var H=d.useState(o),j=ve(H,2),L=j[0],F=j[1],U=d.useState(r),D=ve(U,2),W=D[0],G=D[1],q=d.useState(r),J=ve(q,2),Y=J[0],Q=J[1],te=d.useState([]),ce=ve(te,2),se=ce[0],ne=ce[1],ae=d.useState(null),ee=ve(ae,2),re=ee[0],le=ee[1],pe=d.useRef(r);pe.current=r;function Oe(){var ye=pe.current;G(ye),Q(ye),ne([]),le(null),V()}sn(function(){F(o);var ye=qG(L,o);if(ye.key!==null)if(ye.add){var Te=W.findIndex(function(et){var rt=et.key;return rt===ye.key}),Ae=cO(aO(W,r,ye.key),S,w,C),me=W.slice();me.splice(Te+1,0,lO),Q(me),ne(Ae),le("show")}else{var Ie=r.findIndex(function(et){var rt=et.key;return rt===ye.key}),Le=cO(aO(r,W,ye.key),S,w,C),Be=r.slice();Be.splice(Ie+1,0,lO),Q(Be),ne(Le),le("hide")}else 
W!==r&&(G(r),Q(r))},[o,r]),d.useEffect(function(){h||Oe()},[h]);var ge=y?Y:r,Re={expandedKeys:o,selectedKeys:i,loadedKeys:s,loadingKeys:c,checkedKeys:a,halfCheckedKeys:u,dragOverNodeKey:m,dropPosition:b,keyEntities:p};return d.createElement(d.Fragment,null,O&&k&&d.createElement("span",{style:sO,"aria-live":"assertive"},YG(k)),d.createElement("div",null,d.createElement("input",{style:sO,disabled:E===!1||v,tabIndex:E!==!1?$:null,onKeyDown:T,onFocus:M,onBlur:P,value:"",onChange:GG,"aria-label":"for screen reader"})),d.createElement("div",{className:"".concat(n,"-treenode"),"aria-hidden":!0,style:{position:"absolute",pointerEvents:"none",visibility:"hidden",height:0,overflow:"hidden",border:0,padding:0}},d.createElement("div",{className:"".concat(n,"-indent")},d.createElement("div",{ref:_,className:"".concat(n,"-indent-unit")}))),d.createElement(qv,$e({},z,{data:ge,itemKey:uO,height:w,fullHeight:!1,virtual:S,itemHeight:C,prefixCls:"".concat(n,"-list"),ref:B,onVisibleChange:function(Te){Te.every(function(Ae){return uO(Ae)!==Rs})&&Oe()}}),function(ye){var Te=ye.pos,Ae=Object.assign({},(yN(ye.data),ye.data)),me=ye.title,Ie=ye.key,Le=ye.isStart,Be=ye.isEnd,et=Pd(Ie,Te);delete Ae.key,delete Ae.children;var rt=ku(et,Re);return d.createElement(KG,$e({},Ae,rt,{title:me,active:!!k&&Ie===k.key,pos:Te,data:ye.data,isStart:Le,isEnd:Be,motion:y,motionNodes:Ie===Rs?se:null,motionType:re,onMotionStart:A,onMotionEnd:Oe,treeNodeRequiredProps:Re,onMouseMove:function(){R(null)}}))}))});SN.displayName="NodeList";var QG=10,e1=function(e){Co(n,e);var t=Eo(n);function n(){var r;Kn(this,n);for(var o=arguments.length,i=new Array(o),a=0;a2&&arguments[2]!==void 0?arguments[2]:!1,v=r.state,h=v.dragChildrenKeys,m=v.dropPosition,b=v.dropTargetKey,y=v.dropTargetPos,w=v.dropAllowed;if(w){var C=r.props.onDrop;if(r.setState({dragOverNodeKey:null}),r.cleanDragState(),b!==null){var S=Z(Z({},ku(b,r.getTreeNodeRequiredProps())),{},{active:((u=r.getActiveItem())===null||u===void 0?void 
0:u.key)===b,data:io(r.state.keyEntities,b).node}),E=h.indexOf(b)!==-1;Fn(!E,"Can not drop to dragNode's children node. This is a bug of rc-tree. Please report an issue.");var k=Jw(y),O={event:s,node:cr(S),dragNode:r.dragNode?cr(r.dragNode.props):null,dragNodesKeys:[r.dragNode.props.eventKey].concat(h),dropToGap:m!==0,dropPosition:m+Number(k[k.length-1])};p||C==null||C(O),r.dragNode=null}}}),K(Ne(r),"cleanDragState",function(){var s=r.state.draggingNodeKey;s!==null&&r.setState({draggingNodeKey:null,dropPosition:null,dropContainerKey:null,dropTargetKey:null,dropLevelOffset:null,dropAllowed:!0,dragOverNodeKey:null}),r.dragStartMousePosition=null,r.currentMouseOverDroppableNodeKey=null}),K(Ne(r),"triggerExpandActionExpand",function(s,c){var u=r.state,p=u.expandedKeys,v=u.flattenNodes,h=c.expanded,m=c.key,b=c.isLeaf;if(!(b||s.shiftKey||s.metaKey||s.ctrlKey)){var y=v.filter(function(C){return C.key===m})[0],w=cr(Z(Z({},ku(m,r.getTreeNodeRequiredProps())),{},{data:y.data}));r.setExpandedKeys(h?$i(p,m):Zi(p,m)),r.onNodeExpand(s,w)}}),K(Ne(r),"onNodeClick",function(s,c){var u=r.props,p=u.onClick,v=u.expandAction;v==="click"&&r.triggerExpandActionExpand(s,c),p==null||p(s,c)}),K(Ne(r),"onNodeDoubleClick",function(s,c){var u=r.props,p=u.onDoubleClick,v=u.expandAction;v==="doubleClick"&&r.triggerExpandActionExpand(s,c),p==null||p(s,c)}),K(Ne(r),"onNodeSelect",function(s,c){var u=r.state.selectedKeys,p=r.state,v=p.keyEntities,h=p.fieldNames,m=r.props,b=m.onSelect,y=m.multiple,w=c.selected,C=c[h.key],S=!w;S?y?u=Zi(u,C):u=[C]:u=$i(u,C);var E=u.map(function(k){var O=io(v,k);return O?O.node:null}).filter(function(k){return k});r.setUncontrolledState({selectedKeys:u}),b==null||b(u,{event:"select",selected:S,node:c,selectedNodes:E,nativeEvent:s.nativeEvent})}),K(Ne(r),"onNodeCheck",function(s,c,u){var p=r.state,v=p.keyEntities,h=p.checkedKeys,m=p.halfCheckedKeys,b=r.props,y=b.checkStrictly,w=b.onCheck,C=c.key,S,E={event:"check",node:c,checked:u,nativeEvent:s.nativeEvent};if(y){var 
k=u?Zi(h,C):$i(h,C),O=$i(m,C);S={checked:k,halfChecked:O},E.checkedNodes=k.map(function(A){return io(v,A)}).filter(function(A){return A}).map(function(A){return A.node}),r.setUncontrolledState({checkedKeys:k})}else{var $=ta([].concat(Se(h),[C]),!0,v),T=$.checkedKeys,M=$.halfCheckedKeys;if(!u){var P=new Set(T);P.delete(C);var R=ta(Array.from(P),{checked:!1,halfCheckedKeys:M},v);T=R.checkedKeys,M=R.halfCheckedKeys}S=T,E.checkedNodes=[],E.checkedNodesPositions=[],E.halfCheckedKeys=M,T.forEach(function(A){var V=io(v,A);if(V){var z=V.node,B=V.pos;E.checkedNodes.push(z),E.checkedNodesPositions.push({node:z,pos:B})}}),r.setUncontrolledState({checkedKeys:T},!1,{halfCheckedKeys:M})}w==null||w(S,E)}),K(Ne(r),"onNodeLoad",function(s){var c,u=s.key,p=r.state.keyEntities,v=io(p,u);if(!(v!=null&&(c=v.children)!==null&&c!==void 0&&c.length)){var h=new Promise(function(m,b){r.setState(function(y){var w=y.loadedKeys,C=w===void 0?[]:w,S=y.loadingKeys,E=S===void 0?[]:S,k=r.props,O=k.loadData,$=k.onLoad;if(!O||C.indexOf(u)!==-1||E.indexOf(u)!==-1)return null;var T=O(s);return T.then(function(){var M=r.state.loadedKeys,P=Zi(M,u);$==null||$(P,{event:"load",node:s}),r.setUncontrolledState({loadedKeys:P}),r.setState(function(R){return{loadingKeys:$i(R.loadingKeys,u)}}),m()}).catch(function(M){if(r.setState(function(R){return{loadingKeys:$i(R.loadingKeys,u)}}),r.loadingRetryTimes[u]=(r.loadingRetryTimes[u]||0)+1,r.loadingRetryTimes[u]>=QG){var P=r.state.loadedKeys;Fn(!1,"Retry for `loadData` many times but still failed. 
No more retry."),r.setUncontrolledState({loadedKeys:Zi(P,u)}),m()}b(M)}),{loadingKeys:Zi(E,u)}})});return h.catch(function(){}),h}}),K(Ne(r),"onNodeMouseEnter",function(s,c){var u=r.props.onMouseEnter;u==null||u({event:s,node:c})}),K(Ne(r),"onNodeMouseLeave",function(s,c){var u=r.props.onMouseLeave;u==null||u({event:s,node:c})}),K(Ne(r),"onNodeContextMenu",function(s,c){var u=r.props.onRightClick;u&&(s.preventDefault(),u({event:s,node:c}))}),K(Ne(r),"onFocus",function(){var s=r.props.onFocus;r.setState({focused:!0});for(var c=arguments.length,u=new Array(c),p=0;p1&&arguments[1]!==void 0?arguments[1]:!1,u=arguments.length>2&&arguments[2]!==void 0?arguments[2]:null;if(!r.destroyed){var p=!1,v=!0,h={};Object.keys(s).forEach(function(m){if(m in r.props){v=!1;return}p=!0,h[m]=s[m]}),p&&(!c||v)&&r.setState(Z(Z({},h),u))}}),K(Ne(r),"scrollTo",function(s){r.listRef.current.scrollTo(s)}),r}return qn(n,[{key:"componentDidMount",value:function(){this.destroyed=!1,this.onUpdated()}},{key:"componentDidUpdate",value:function(){this.onUpdated()}},{key:"onUpdated",value:function(){var o=this.props,i=o.activeKey,a=o.itemScrollOffset,s=a===void 0?0:a;i!==void 0&&i!==this.state.activeKey&&(this.setState({activeKey:i}),i!==null&&this.scrollTo({key:i,offset:s}))}},{key:"componentWillUnmount",value:function(){window.removeEventListener("dragend",this.onWindowDragEnd),this.destroyed=!0}},{key:"resetDragState",value:function(){this.setState({dragOverNodeKey:null,dropPosition:null,dropLevelOffset:null,dropTargetKey:null,dropContainerKey:null,dropTargetPos:null,dropAllowed:!1})}},{key:"render",value:function(){var o=this.state,i=o.focused,a=o.flattenNodes,s=o.keyEntities,c=o.draggingNodeKey,u=o.activeKey,p=o.dropLevelOffset,v=o.dropContainerKey,h=o.dropTargetKey,m=o.dropPosition,b=o.dragOverNodeKey,y=o.indent,w=this.props,C=w.prefixCls,S=w.className,E=w.style,k=w.showLine,O=w.focusable,$=w.tabIndex,T=$===void 
0?0:$,M=w.selectable,P=w.showIcon,R=w.icon,A=w.switcherIcon,V=w.draggable,z=w.checkable,B=w.checkStrictly,_=w.disabled,H=w.motion,j=w.loadData,L=w.filterTreeNode,F=w.height,U=w.itemHeight,D=w.virtual,W=w.titleRender,G=w.dropIndicatorRender,q=w.onContextMenu,J=w.onScroll,Y=w.direction,Q=w.rootClassName,te=w.rootStyle,ce=Gr(this.props,{aria:!0,data:!0}),se;return V&&(st(V)==="object"?se=V:typeof V=="function"?se={nodeDraggable:V}:se={}),d.createElement(Zw.Provider,{value:{prefixCls:C,selectable:M,showIcon:P,icon:R,switcherIcon:A,draggable:se,draggingNodeKey:c,checkable:z,checkStrictly:B,disabled:_,keyEntities:s,dropLevelOffset:p,dropContainerKey:v,dropTargetKey:h,dropPosition:m,dragOverNodeKey:b,indent:y,direction:Y,dropIndicatorRender:G,loadData:j,filterTreeNode:L,titleRender:W,onNodeClick:this.onNodeClick,onNodeDoubleClick:this.onNodeDoubleClick,onNodeExpand:this.onNodeExpand,onNodeSelect:this.onNodeSelect,onNodeCheck:this.onNodeCheck,onNodeLoad:this.onNodeLoad,onNodeMouseEnter:this.onNodeMouseEnter,onNodeMouseLeave:this.onNodeMouseLeave,onNodeContextMenu:this.onNodeContextMenu,onNodeDragStart:this.onNodeDragStart,onNodeDragEnter:this.onNodeDragEnter,onNodeDragOver:this.onNodeDragOver,onNodeDragLeave:this.onNodeDragLeave,onNodeDragEnd:this.onNodeDragEnd,onNodeDrop:this.onNodeDrop}},d.createElement("div",{role:"tree",className:ie(C,S,Q,K(K(K({},"".concat(C,"-show-line"),k),"".concat(C,"-focused"),i),"".concat(C,"-active-focused"),u!==null)),style:te},d.createElement(SN,$e({ref:this.listRef,prefixCls:C,style:E,data:a,disabled:_,selectable:M,checkable:!!z,motion:H,dragging:c!==null,height:F,itemHeight:U,virtual:D,focusable:O,focused:i,tabIndex:T,activeItem:this.getActiveItem(),onFocus:this.onFocus,onBlur:this.onBlur,onKeyDown:this.onKeyDown,onActiveChange:this.onActiveChange,onListChangeStart:this.onListChangeStart,onListChangeEnd:this.onListChangeEnd,onContextMenu:q,onScroll:J},this.getTreeNodeRequiredProps(),ce))))}}],[{key:"getDerivedStateFromProps",value:function(o
,i){var a=i.prevProps,s={prevProps:o};function c(T){return!a&&T in o||a&&a[T]!==o[T]}var u,p=i.fieldNames;if(c("fieldNames")&&(p=ec(o.fieldNames),s.fieldNames=p),c("treeData")?u=o.treeData:c("children")&&(Fn(!1,"`children` of Tree is deprecated. Please use `treeData` instead."),u=iM(o.children)),u){s.treeData=u;var v=nh(u,{fieldNames:p});s.keyEntities=Z(K({},Rs,xN),v.keyEntities)}var h=s.keyEntities||i.keyEntities;if(c("expandedKeys")||a&&c("autoExpandParent"))s.expandedKeys=o.autoExpandParent||!a&&o.defaultExpandParent?Ab(o.expandedKeys,h):o.expandedKeys;else if(!a&&o.defaultExpandAll){var m=Z({},h);delete m[Rs];var b=[];Object.keys(m).forEach(function(T){var M=m[T];M.children&&M.children.length&&b.push(M.key)}),s.expandedKeys=b}else!a&&o.defaultExpandedKeys&&(s.expandedKeys=o.autoExpandParent||o.defaultExpandParent?Ab(o.defaultExpandedKeys,h):o.defaultExpandedKeys);if(s.expandedKeys||delete s.expandedKeys,u||s.expandedKeys){var y=Dm(u||i.treeData,s.expandedKeys||i.expandedKeys,p);s.flattenNodes=y}if(o.selectable&&(c("selectedKeys")?s.selectedKeys=oO(o.selectedKeys,o):!a&&o.defaultSelectedKeys&&(s.selectedKeys=oO(o.defaultSelectedKeys,o))),o.checkable){var w;if(c("checkedKeys")?w=_m(o.checkedKeys)||{}:!a&&o.defaultCheckedKeys?w=_m(o.defaultCheckedKeys)||{}:u&&(w=_m(o.checkedKeys)||{checkedKeys:i.checkedKeys,halfCheckedKeys:i.halfCheckedKeys}),w){var C=w,S=C.checkedKeys,E=S===void 0?[]:S,k=C.halfCheckedKeys,O=k===void 0?[]:k;if(!o.checkStrictly){var $=ta(E,!0,h);E=$.checkedKeys,O=$.halfCheckedKeys}s.checkedKeys=E,s.halfCheckedKeys=O}}return 
c("loadedKeys")&&(s.loadedKeys=o.loadedKeys),s}}]),n}(d.Component);K(e1,"defaultProps",{prefixCls:"rc-tree",showLine:!1,showIcon:!0,selectable:!0,multiple:!1,checkable:!1,disabled:!1,checkStrictly:!1,draggable:!1,defaultExpandParent:!0,autoExpandParent:!1,defaultExpandAll:!1,defaultExpandedKeys:[],defaultCheckedKeys:[],defaultSelectedKeys:[],dropIndicatorRender:VG,allowDrop:function(){return!0},expandAction:!1});K(e1,"TreeNode",tc);const ZG=e=>{let{treeCls:t,treeNodeCls:n,directoryNodeSelectedBg:r,directoryNodeSelectedColor:o,motionDurationMid:i,borderRadius:a,controlItemBgHover:s}=e;return{[`${t}${t}-directory ${n}`]:{[`${t}-node-content-wrapper`]:{position:"static",[`> *:not(${t}-drop-indicator)`]:{position:"relative"},"&:hover":{background:"transparent"},"&:before":{position:"absolute",inset:0,transition:`background-color ${i}`,content:'""',borderRadius:a},"&:hover:before":{background:s}},[`${t}-switcher`]:{marginInlineEnd:0},"&-selected":{[`${t}-switcher, ${t}-draggable-icon`]:{color:o,zIndex:1},[`${t}-node-content-wrapper`]:{color:o,background:"transparent","&:before, &:hover:before":{background:r}}}}}},JG=new fn("ant-tree-node-fx-do-not-use",{"0%":{opacity:0},"100%":{opacity:1}}),eY=(e,t)=>({[`.${e}-switcher-icon`]:{display:"inline-block",fontSize:10,verticalAlign:"baseline",svg:{transition:`transform ${t.motionDurationSlow}`}}}),tY=(e,t)=>({[`.${e}-drop-indicator`]:{position:"absolute",zIndex:1,height:2,backgroundColor:t.colorPrimary,borderRadius:1,pointerEvents:"none","&:after":{position:"absolute",top:-3,insetInlineStart:-6,width:8,height:8,backgroundColor:"transparent",border:`${de(t.lineWidthBold)} solid ${t.colorPrimary}`,borderRadius:"50%",content:'""'}}}),nY=(e,t)=>{const{treeCls:n,treeNodeCls:r,treeNodePadding:o,titleHeight:i,nodeSelectedBg:a,nodeHoverBg:s,colorTextQuaternary:c}=t,u=t.marginXXS;return{[n]:Object.assign(Object.assign({},jn(t)),{background:t.colorBgContainer,borderRadius:t.borderRadius,transition:`background-color 
${t.motionDurationSlow}`,"&-rtl":{direction:"rtl"},[`&${n}-rtl ${n}-switcher_close ${n}-switcher-icon svg`]:{transform:"rotate(90deg)"},[`&-focused:not(:hover):not(${n}-active-focused)`]:Object.assign({},qa(t)),[`${n}-list-holder-inner`]:{alignItems:"flex-start"},[`&${n}-block-node`]:{[`${n}-list-holder-inner`]:{alignItems:"stretch",[`${n}-node-content-wrapper`]:{flex:"auto"},[`${r}.dragging:after`]:{position:"absolute",inset:0,border:`1px solid ${t.colorPrimary}`,opacity:0,animationName:JG,animationDuration:t.motionDurationSlow,animationPlayState:"running",animationFillMode:"forwards",content:'""',pointerEvents:"none",borderRadius:t.borderRadius}}},[r]:{display:"flex",alignItems:"flex-start",marginBottom:o,lineHeight:de(i),position:"relative","&:before":{content:'""',position:"absolute",zIndex:1,insetInlineStart:0,width:"100%",top:"100%",height:o},[`&-disabled ${n}-node-content-wrapper`]:{color:t.colorTextDisabled,cursor:"not-allowed","&:hover":{background:"transparent"}},[`&-active ${n}-node-content-wrapper`]:{background:t.controlItemBgHover},[`&:not(${r}-disabled).filter-node ${n}-title`]:{color:t.colorPrimary,fontWeight:500},"&-draggable":{cursor:"grab",[`${n}-draggable-icon`]:{flexShrink:0,width:i,textAlign:"center",visibility:"visible",color:c},[`&${r}-disabled ${n}-draggable-icon`]:{visibility:"hidden"}}},[`${n}-indent`]:{alignSelf:"stretch",whiteSpace:"nowrap",userSelect:"none","&-unit":{display:"inline-block",width:i}},[`${n}-draggable-icon`]:{visibility:"hidden"},[`${n}-switcher`]:Object.assign(Object.assign({},eY(e,t)),{position:"relative",flex:"none",alignSelf:"stretch",width:i,margin:0,textAlign:"center",cursor:"pointer",userSelect:"none",transition:`all ${t.motionDurationSlow}`,marginInlineEnd:t.calc(t.calc(i).sub(t.controlInteractiveSize)).div(2).equal(),"&-noop":{cursor:"unset"},"&:before":{pointerEvents:"none",content:'""',width:i,height:i,position:"absolute",left:{_skip_check_:!0,value:0},top:0,borderRadius:t.borderRadius,transition:`all 
${t.motionDurationSlow}`},[`&:not(${n}-switcher-noop):hover:before`]:{backgroundColor:t.colorBgTextHover},[`&_close ${n}-switcher-icon svg`]:{transform:"rotate(-90deg)"},"&-loading-icon":{color:t.colorPrimary},"&-leaf-line":{position:"relative",zIndex:1,display:"inline-block",width:"100%",height:"100%","&:before":{position:"absolute",top:0,insetInlineEnd:t.calc(i).div(2).equal(),bottom:t.calc(o).mul(-1).equal(),marginInlineStart:-1,borderInlineEnd:`1px solid ${t.colorBorder}`,content:'""'},"&:after":{position:"absolute",width:t.calc(t.calc(i).div(2).equal()).mul(.8).equal(),height:t.calc(i).div(2).equal(),borderBottom:`1px solid ${t.colorBorder}`,content:'""'}}}),[`${n}-checkbox`]:{top:"initial",marginInlineEnd:u,alignSelf:"flex-start",marginTop:t.marginXXS},[`${n}-node-content-wrapper`]:Object.assign(Object.assign({position:"relative",minHeight:i,padding:`0 ${t.paddingXS}`,background:"transparent",borderRadius:t.borderRadius,cursor:"pointer",transition:`all ${t.motionDurationMid}, border 0s, line-height 0s, box-shadow 0s`},tY(e,t)),{"&:hover":{backgroundColor:s},[`&${n}-node-selected`]:{backgroundColor:a},[`${n}-iconEle`]:{display:"inline-block",width:i,height:i,textAlign:"center",verticalAlign:"top","&:empty":{display:"none"}}}),[`${n}-unselectable ${n}-node-content-wrapper:hover`]:{backgroundColor:"transparent"},[`${r}.drop-container > [draggable]`]:{boxShadow:`0 0 0 2px ${t.colorPrimary}`},"&-show-line":{[`${n}-indent-unit`]:{position:"relative",height:"100%","&:before":{position:"absolute",top:0,insetInlineEnd:t.calc(i).div(2).equal(),bottom:t.calc(o).mul(-1).equal(),borderInlineEnd:`1px solid ${t.colorBorder}`,content:'""'},"&-end:before":{display:"none"}},[`${n}-switcher`]:{background:"transparent","&-line-icon":{verticalAlign:"-0.15em"}}},[`${r}-leaf-last ${n}-switcher-leaf-line:before`]:{top:"auto !important",bottom:"auto !important",height:`${de(t.calc(i).div(2).equal())} !important`}})}},rY=(e,t)=>{const 
n=`.${e}`,r=`${n}-treenode`,o=t.calc(t.paddingXS).div(2).equal(),i=vn(t,{treeCls:n,treeNodeCls:r,treeNodePadding:o});return[nY(e,i),ZG(i)]},oY=e=>{const{controlHeightSM:t}=e;return{titleHeight:t,nodeHoverBg:e.controlItemBgHover,nodeSelectedBg:e.controlItemBgActive}},iY=e=>{const{colorTextLightSolid:t,colorPrimary:n}=e;return Object.assign(Object.assign({},oY(e)),{directoryNodeSelectedColor:t,directoryNodeSelectedBg:n})},aY=In("Tree",(e,t)=>{let{prefixCls:n}=t;return[{[e.componentCls]:Vw(`${n}-checkbox`,e)},rY(n,e),zv(e)]},iY),dO=4;function sY(e){const{dropPosition:t,dropLevelOffset:n,prefixCls:r,indent:o,direction:i="ltr"}=e,a=i==="ltr"?"left":"right",s=i==="ltr"?"right":"left",c={[a]:-n*o+dO,[s]:0};switch(t){case-1:c.top=-3;break;case 1:c.bottom=-3;break;default:c.bottom=-3,c[a]=o+dO;break}return ue.createElement("div",{style:c,className:`${r}-drop-indicator`})}const lY=e=>{const{prefixCls:t,switcherIcon:n,treeNodeProps:r,showLine:o,switcherLoadingIcon:i}=e,{isLeaf:a,expanded:s,loading:c}=r;if(c)return d.isValidElement(i)?i:d.createElement(Xa,{className:`${t}-switcher-loading-icon`});let u;if(o&&typeof o=="object"&&(u=o.showLeafIcon),a){if(!o)return null;if(typeof u!="boolean"&&u){const h=typeof u=="function"?u(r):u,m=`${t}-switcher-line-custom-icon`;return d.isValidElement(h)?Dr(h,{className:ie(h.props.className||"",m)}):h}return u?d.createElement(tN,{className:`${t}-switcher-line-icon`}):d.createElement("span",{className:`${t}-switcher-leaf-line`})}const p=`${t}-switcher-icon`,v=typeof n=="function"?n(r):n;return d.isValidElement(v)?Dr(v,{className:ie(v.props.className||"",p)}):v!==void 0?v:o?s?d.createElement(iX,{className:`${t}-switcher-line-icon`}):d.createElement(vX,{className:`${t}-switcher-line-icon`}):d.createElement(uq,{className:p})},CN=ue.forwardRef((e,t)=>{var 
n;const{getPrefixCls:r,direction:o,virtual:i,tree:a}=ue.useContext(ht),{prefixCls:s,className:c,showIcon:u=!1,showLine:p,switcherIcon:v,switcherLoadingIcon:h,blockNode:m=!1,children:b,checkable:y=!1,selectable:w=!0,draggable:C,motion:S,style:E}=e,k=r("tree",s),O=r(),$=S??Object.assign(Object.assign({},Ju(O)),{motionAppear:!1}),T=Object.assign(Object.assign({},e),{checkable:y,selectable:w,showIcon:u,motion:$,blockNode:m,showLine:!!p,dropIndicatorRender:sY}),[M,P,R]=aY(k),[,A]=Ir(),V=A.paddingXS/2+(((n=A.Tree)===null||n===void 0?void 0:n.titleHeight)||A.controlHeightSM),z=ue.useMemo(()=>{if(!C)return!1;let _={};switch(typeof C){case"function":_.nodeDraggable=C;break;case"object":_=Object.assign({},C);break}return _.icon!==!1&&(_.icon=_.icon||ue.createElement(nX,null)),_},[C]),B=_=>ue.createElement(lY,{prefixCls:k,switcherIcon:v,switcherLoadingIcon:h,treeNodeProps:_,showLine:p});return M(ue.createElement(e1,Object.assign({itemHeight:V,ref:t,virtual:i},T,{style:Object.assign(Object.assign({},a==null?void 0:a.style),E),prefixCls:k,className:ie({[`${k}-icon-hide`]:!u,[`${k}-block-node`]:m,[`${k}-unselectable`]:!w,[`${k}-rtl`]:o==="rtl"},a==null?void 0:a.className,c,P,R),direction:o,checkable:y&&ue.createElement("span",{className:`${k}-checkbox-inner`}),selectable:w,switcherIcon:B,draggable:z}),b))}),fO=0,Vm=1,pO=2;function t1(e,t,n){const{key:r,children:o}=n;function i(a){const s=a[r],c=a[o];t(s,a)!==!1&&t1(c||[],t,n)}e.forEach(i)}function cY(e){let{treeData:t,expandedKeys:n,startKey:r,endKey:o,fieldNames:i}=e;const a=[];let s=fO;if(r&&r===o)return[r];if(!r||!o)return[];function c(u){return u===r||u===o}return t1(t,u=>{if(s===pO)return!1;if(c(u)){if(a.push(u),s===fO)s=Vm;else if(s===Vm)return s=pO,!1}else s===Vm&&a.push(u);return n.includes(u)},ec(i)),a}function Wm(e,t,n){const r=Se(t),o=[];return t1(e,(i,a)=>{const s=r.indexOf(i);return s!==-1&&(o.push(a),r.splice(s,1)),!!r.length},ec(n)),o}var vO=function(e,t){var n={};for(var r in 
e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{var{defaultExpandAll:n,defaultExpandParent:r,defaultExpandedKeys:o}=e,i=vO(e,["defaultExpandAll","defaultExpandParent","defaultExpandedKeys"]);const a=d.useRef(),s=d.useRef(),c=()=>{const{keyEntities:M}=nh(hO(i));let P;return n?P=Object.keys(M):r?P=Ab(i.expandedKeys||o||[],M):P=i.expandedKeys||o||[],P},[u,p]=d.useState(i.selectedKeys||i.defaultSelectedKeys||[]),[v,h]=d.useState(()=>c());d.useEffect(()=>{"selectedKeys"in i&&p(i.selectedKeys)},[i.selectedKeys]),d.useEffect(()=>{"expandedKeys"in i&&h(i.expandedKeys)},[i.expandedKeys]);const m=(M,P)=>{var R;return"expandedKeys"in i||h(M),(R=i.onExpand)===null||R===void 0?void 0:R.call(i,M,P)},b=(M,P)=>{var R;const{multiple:A,fieldNames:V}=i,{node:z,nativeEvent:B}=P,{key:_=""}=z,H=hO(i),j=Object.assign(Object.assign({},P),{selected:!0}),L=(B==null?void 0:B.ctrlKey)||(B==null?void 0:B.metaKey),F=B==null?void 0:B.shiftKey;let U;A&&L?(U=M,a.current=_,s.current=U,j.selectedNodes=Wm(H,U,V)):A&&F?(U=Array.from(new Set([].concat(Se(s.current||[]),Se(cY({treeData:H,expandedKeys:v,startKey:_,endKey:a.current,fieldNames:V}))))),j.selectedNodes=Wm(H,U,V)):(U=[_],a.current=_,s.current=U,j.selectedNodes=Wm(H,U,V)),(R=i.onSelect)===null||R===void 0||R.call(i,U,j),"selectedKeys"in i||p(U)},{getPrefixCls:y,direction:w}=d.useContext(ht),{prefixCls:C,className:S,showIcon:E=!0,expandAction:k="click"}=i,O=vO(i,["prefixCls","className","showIcon","expandAction"]),$=y("tree",C),T=ie(`${$}-directory`,{[`${$}-directory-rtl`]:w==="rtl"},S);return d.createElement(CN,Object.assign({icon:uY,ref:t,blockNode:!0},O,{showIcon:E,expandAction:k,prefixCls:$,className:T,expandedKeys:v,selectedKeys:u,onSelect:b,onExpand:m}))},fY=d.forwardRef(dY),n1=CN;n1.DirectoryTree=fY;n1.TreeNode=tc;const gO=e=>{const{value:t,filterSearch:n,tablePrefixCls:r,locale:o,onChange:i}=e;return 
n?d.createElement("div",{className:`${r}-filter-dropdown-search`},d.createElement(Fo,{prefix:d.createElement(Ew,null),placeholder:o.filterSearchPlaceholder,onChange:i,value:t,htmlSize:1,className:`${r}-filter-dropdown-search-input`})):null},pY=e=>{const{keyCode:t}=e;t===De.ENTER&&e.stopPropagation()},vY=d.forwardRef((e,t)=>d.createElement("div",{className:e.className,onClick:n=>n.stopPropagation(),onKeyDown:pY,ref:t},e.children));function Rl(e){let t=[];return(e||[]).forEach(n=>{let{value:r,children:o}=n;t.push(r),o&&(t=[].concat(Se(t),Se(Rl(o))))}),t}function hY(e){return e.some(t=>{let{children:n}=t;return n})}function EN(e,t){return typeof t=="string"||typeof t=="number"?t==null?void 0:t.toString().toLowerCase().includes(e.trim().toLowerCase()):!1}function kN(e){let{filters:t,prefixCls:n,filteredKeys:r,filterMultiple:o,searchValue:i,filterSearch:a}=e;return t.map((s,c)=>{const u=String(s.value);if(s.children)return{key:u||c,label:s.text,popupClassName:`${n}-dropdown-submenu`,children:kN({filters:s.children,prefixCls:n,filteredKeys:r,filterMultiple:o,searchValue:i,filterSearch:a})};const p=o?Ns:Td,v={key:s.value!==void 0?u:c,label:d.createElement(d.Fragment,null,d.createElement(p,{checked:r.includes(u)}),d.createElement("span",null,s.text))};return i.trim()?typeof a=="function"?a(i,s)?v:null:EN(i,s.text)?v:null:v})}function Um(e){return e||[]}const gY=e=>{var t,n;const{tablePrefixCls:r,prefixCls:o,column:i,dropdownPrefixCls:a,columnKey:s,filterOnClose:c,filterMultiple:u,filterMode:p="menu",filterSearch:v=!1,filterState:h,triggerFilter:m,locale:b,children:y,getPopupContainer:w,rootClassName:C}=e,{filterDropdownOpen:S,onFilterDropdownOpenChange:E,filterResetToDefaultFilteredValue:k,defaultFilteredValue:O,filterDropdownVisible:$,onFilterDropdownVisibleChange:T}=i,[M,P]=d.useState(!1),R=!!(h&&(!((t=h.filteredKeys)===null||t===void 0)&&t.length||h.forceFiltered)),A=ge=>{P(ge),E==null||E(ge),T==null||T(ge)},V=(n=S??$)!==null&&n!==void 0?n:M,z=h==null?void 
0:h.filteredKeys,[B,_]=_G(Um(z)),H=ge=>{let{selectedKeys:Re}=ge;_(Re)},j=(ge,Re)=>{let{node:ye,checked:Te}=Re;H(u?{selectedKeys:ge}:{selectedKeys:Te&&ye.key?[ye.key]:[]})};d.useEffect(()=>{M&&H({selectedKeys:Um(z)})},[z]);const[L,F]=d.useState([]),U=ge=>{F(ge)},[D,W]=d.useState(""),G=ge=>{const{value:Re}=ge.target;W(Re)};d.useEffect(()=>{M||W("")},[M]);const q=ge=>{const Re=ge!=null&&ge.length?ge:null;if(Re===null&&(!h||!h.filteredKeys)||zi(Re,h==null?void 0:h.filteredKeys,!0))return null;m({column:i,key:s,filteredKeys:Re})},J=()=>{A(!1),q(B())},Y=function(){let{confirm:ge,closeDropdown:Re}=arguments.length>0&&arguments[0]!==void 0?arguments[0]:{confirm:!1,closeDropdown:!1};ge&&q([]),Re&&A(!1),W(""),_(k?(O||[]).map(ye=>String(ye)):[])},Q=function(){let{closeDropdown:ge}=arguments.length>0&&arguments[0]!==void 0?arguments[0]:{closeDropdown:!0};ge&&A(!1),q(B())},te=(ge,Re)=>{Re.source==="trigger"&&(ge&&z!==void 0&&_(Um(z)),A(ge),!ge&&!i.filterDropdown&&c&&J())},ce=ie({[`${a}-menu-without-submenu`]:!hY(i.filters||[])}),se=ge=>{if(ge.target.checked){const Re=Rl(i==null?void 0:i.filters).map(ye=>String(ye));_(Re)}else _([])},ne=ge=>{let{filters:Re}=ge;return(Re||[]).map((ye,Te)=>{const Ae=String(ye.value),me={title:ye.text,key:ye.value!==void 0?Ae:String(Te)};return ye.children&&(me.children=ne({filters:ye.children})),me})},ae=ge=>{var Re;return Object.assign(Object.assign({},ge),{text:ge.title,value:ge.key,children:((Re=ge.children)===null||Re===void 0?void 0:Re.map(ye=>ae(ye)))||[]})};let ee;const{direction:re,renderEmpty:le}=d.useContext(ht);if(typeof i.filterDropdown=="function")ee=i.filterDropdown({prefixCls:`${a}-custom`,setSelectedKeys:ge=>H({selectedKeys:ge}),selectedKeys:B(),confirm:Q,clearFilters:Y,filters:i.filters,visible:V,close:()=>{A(!1)}});else if(i.filterDropdown)ee=i.filterDropdown;else{const ge=B()||[],Re=()=>{var Te;const Ae=(Te=le==null?void 0:le("Table.filter"))!==null&&Te!==void 
0?Te:d.createElement(Ji,{image:Ji.PRESENTED_IMAGE_SIMPLE,description:b.filterEmptyText,imageStyle:{height:24},style:{margin:0,padding:"16px 0"}});if((i.filters||[]).length===0)return Ae;if(p==="tree")return d.createElement(d.Fragment,null,d.createElement(gO,{filterSearch:v,value:D,onChange:G,tablePrefixCls:r,locale:b}),d.createElement("div",{className:`${r}-filter-dropdown-tree`},u?d.createElement(Ns,{checked:ge.length===Rl(i.filters).length,indeterminate:ge.length>0&&ge.lengthtypeof v=="function"?v(D,ae(Le)):EN(D,Le.title):void 0})));const me=kN({filters:i.filters||[],filterSearch:v,prefixCls:o,filteredKeys:B(),filterMultiple:u,searchValue:D}),Ie=me.every(Le=>Le===null);return d.createElement(d.Fragment,null,d.createElement(gO,{filterSearch:v,value:D,onChange:G,tablePrefixCls:r,locale:b}),Ie?Ae:d.createElement(pc,{selectable:!0,multiple:u,prefixCls:`${a}-menu`,className:ce,onSelect:H,onDeselect:H,selectedKeys:ge,getPopupContainer:w,openKeys:L,onOpenChange:U,items:me}))},ye=()=>k?zi((O||[]).map(Te=>String(Te)),ge,!0):ge.length===0;ee=d.createElement(d.Fragment,null,Re(),d.createElement("div",{className:`${o}-dropdown-btns`},d.createElement(jr,{type:"link",size:"small",disabled:ye(),onClick:()=>Y()},b.filterReset),d.createElement(jr,{type:"primary",size:"small",onClick:J},b.filterConfirm)))}i.filterDropdown&&(ee=d.createElement(WP,{selectable:void 0},ee));const pe=()=>d.createElement(vY,{className:`${o}-dropdown`},ee);let Oe;return typeof i.filterIcon=="function"?Oe=i.filterIcon(R):i.filterIcon?Oe=i.filterIcon:Oe=d.createElement(Wq,null),d.createElement("div",{className:`${o}-column`},d.createElement("span",{className:`${r}-column-title`},y),d.createElement(Uw,{dropdownRender:pe,trigger:["click"],open:V,onOpenChange:te,getPopupContainer:w,placement:re==="rtl"?"bottomLeft":"bottomRight",rootClassName:C},d.createElement("span",{role:"button",tabIndex:-1,className:ie(`${o}-trigger`,{active:R}),onClick:ge=>{ge.stopPropagation()}},Oe)))},Vb=(e,t,n)=>{let 
r=[];return(e||[]).forEach((o,i)=>{var a;const s=yc(i,n);if(o.filters||"filterDropdown"in o||"onFilter"in o)if("filteredValue"in o){let c=o.filteredValue;"filterDropdown"in o||(c=(a=c==null?void 0:c.map(String))!==null&&a!==void 0?a:c),r.push({column:o,key:Ga(o,s),filteredKeys:c,forceFiltered:o.filtered})}else r.push({column:o,key:Ga(o,s),filteredKeys:t&&o.defaultFilteredValue?o.defaultFilteredValue:void 0,forceFiltered:o.filtered});"children"in o&&(r=[].concat(Se(r),Se(Vb(o.children,t,s))))}),r};function ON(e,t,n,r,o,i,a,s,c){return n.map((u,p)=>{const v=yc(p,s),{filterOnClose:h=!0,filterMultiple:m=!0,filterMode:b,filterSearch:y}=u;let w=u;if(w.filters||w.filterDropdown){const C=Ga(w,v),S=r.find(E=>{let{key:k}=E;return C===k});w=Object.assign(Object.assign({},w),{title:E=>d.createElement(gY,{tablePrefixCls:e,prefixCls:`${e}-filter`,dropdownPrefixCls:t,column:w,columnKey:C,filterState:S,filterOnClose:h,filterMultiple:m,filterMode:b,filterSearch:y,triggerFilter:i,locale:o,getPopupContainer:a,rootClassName:c},sh(u.title,E))})}return"children"in w&&(w=Object.assign(Object.assign({},w),{children:ON(e,t,w.children,r,o,i,a,v,c)})),w})}const mO=e=>{const t={};return e.forEach(n=>{let{key:r,filteredKeys:o,column:i}=n;const a=r,{filters:s,filterDropdown:c}=i;if(c)t[a]=o||null;else if(Array.isArray(o)){const u=Rl(s);t[a]=u.filter(p=>o.includes(String(p)))}else t[a]=null}),t},Wb=(e,t,n)=>t.reduce((o,i)=>{const{column:{onFilter:a,filters:s},filteredKeys:c}=i;return a&&c&&c.length?o.map(u=>Object.assign({},u)).filter(u=>c.some(p=>{const v=Rl(s),h=v.findIndex(b=>String(b)===String(p)),m=h!==-1?v[h]:p;return u[n]&&(u[n]=Wb(u[n],t,n)),a(m,u)})):o},e),$N=e=>e.flatMap(t=>"children"in t?[t].concat(Se($N(t.children||[]))):[t]),mY=e=>{const{prefixCls:t,dropdownPrefixCls:n,mergedColumns:r,onFilterChange:o,getPopupContainer:i,locale:a,rootClassName:s}=e;As();const c=d.useMemo(()=>$N(r||[]),[r]),[u,p]=d.useState(()=>Vb(c,!0)),v=d.useMemo(()=>{const y=Vb(c,!1);if(y.length===0)return y;let 
w=!0;if(y.forEach(C=>{let{filteredKeys:S}=C;S!==void 0&&(w=!1)}),w){const C=(c||[]).map((S,E)=>Ga(S,yc(E)));return u.filter(S=>{let{key:E}=S;return C.includes(E)}).map(S=>{const E=c[C.findIndex(k=>k===S.key)];return Object.assign(Object.assign({},S),{column:Object.assign(Object.assign({},S.column),E),forceFiltered:E.filtered})})}return y},[c,u]),h=d.useMemo(()=>mO(v),[v]),m=y=>{const w=v.filter(C=>{let{key:S}=C;return S!==y.key});w.push(y),p(w),o(mO(w),w)};return[y=>ON(t,n,y,v,a,m,i,void 0,s),v,h]},bY=(e,t,n)=>{const r=d.useRef({});function o(i){var a;if(!r.current||r.current.data!==e||r.current.childrenColumnName!==t||r.current.getRowKey!==n){let c=function(u){u.forEach((p,v)=>{const h=n(p,v);s.set(h,p),p&&typeof p=="object"&&t in p&&c(p[t]||[])})};const s=new Map;c(e),r.current={data:e,childrenColumnName:t,kvMap:s,getRowKey:n}}return(a=r.current.kvMap)===null||a===void 0?void 0:a.get(i)}return[o]};var yY=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{const i=e[o];typeof i!="function"&&(n[o]=i)}),n}function xY(e,t,n){const r=n&&typeof n=="object"?n:{},{total:o=0}=r,i=yY(r,["total"]),[a,s]=d.useState(()=>({current:"defaultCurrent"in i?i.defaultCurrent:1,pageSize:"defaultPageSize"in i?i.defaultPageSize:IN})),c=_U(a,i,{total:o>0?o:e}),u=Math.ceil((o||e)/c.pageSize);c.current>u&&(c.current=u||1);const p=(h,m)=>{s({current:h??1,pageSize:m||c.pageSize})},v=(h,m)=>{var b;n&&((b=n.onChange)===null||b===void 0||b.call(n,h,m)),p(h,m),t(h,m||(c==null?void 0:c.pageSize))};return n===!1?[{},()=>{}]:[Object.assign(Object.assign({},c),{onChange:v}),p]}const jp="ascend",Km="descend",dv=e=>typeof e.sorter=="object"&&typeof e.sorter.multiple=="number"?e.sorter.multiple:!1,bO=e=>typeof e=="function"?e:e&&typeof e=="object"&&e.compare?e.compare:!1,SY=(e,t)=>t?e[e.indexOf(t)+1]:e[0],Ub=(e,t,n)=>{let r=[];const 
o=(i,a)=>{r.push({column:i,key:Ga(i,a),multiplePriority:dv(i),sortOrder:i.sortOrder})};return(e||[]).forEach((i,a)=>{const s=yc(a,n);i.children?("sortOrder"in i&&o(i,s),r=[].concat(Se(r),Se(Ub(i.children,t,s)))):i.sorter&&("sortOrder"in i?o(i,s):t&&i.defaultSortOrder&&r.push({column:i,key:Ga(i,s),multiplePriority:dv(i),sortOrder:i.defaultSortOrder}))}),r},TN=(e,t,n,r,o,i,a,s)=>(t||[]).map((u,p)=>{const v=yc(p,s);let h=u;if(h.sorter){const m=h.sortDirections||o,b=h.showSorterTooltip===void 0?a:h.showSorterTooltip,y=Ga(h,v),w=n.find(P=>{let{key:R}=P;return R===y}),C=w?w.sortOrder:null,S=SY(m,C);let E;if(u.sortIcon)E=u.sortIcon({sortOrder:C});else{const P=m.includes(jp)&&d.createElement(gq,{className:ie(`${e}-column-sorter-up`,{active:C===jp})}),R=m.includes(Km)&&d.createElement(pq,{className:ie(`${e}-column-sorter-down`,{active:C===Km})});E=d.createElement("span",{className:ie(`${e}-column-sorter`,{[`${e}-column-sorter-full`]:!!(P&&R)})},d.createElement("span",{className:`${e}-column-sorter-inner`,"aria-hidden":"true"},P,R))}const{cancelSort:k,triggerAsc:O,triggerDesc:$}=i||{};let T=k;S===Km?T=$:S===jp&&(T=O);const M=typeof b=="object"?Object.assign({title:T},b):{title:T};h=Object.assign(Object.assign({},h),{className:ie(h.className,{[`${e}-column-sort`]:C}),title:P=>{const R=`${e}-column-sorters`,A=d.createElement("span",{className:`${e}-column-title`},sh(u.title,P)),V=d.createElement("div",{className:R},A,E);return b?typeof b!="boolean"&&(b==null?void 0:b.target)==="sorter-icon"?d.createElement("div",{className:`${R} ${e}-column-sorters-tooltip-target-sorter`},A,d.createElement(gi,Object.assign({},M),E)):d.createElement(gi,Object.assign({},M),V):V},onHeaderCell:P=>{var R;const A=((R=u.onHeaderCell)===null||R===void 0?void 0:R.call(u,P))||{},V=A.onClick,z=A.onKeyDown;A.onClick=H=>{r({column:u,key:y,sortOrder:S,multiplePriority:dv(u)}),V==null||V(H)},A.onKeyDown=H=>{H.keyCode===De.ENTER&&(r({column:u,key:y,sortOrder:S,multiplePriority:dv(u)}),z==null||z(H))};const 
B=FG(u.title,{}),_=B==null?void 0:B.toString();return C?A["aria-sort"]=C==="ascend"?"ascending":"descending":A["aria-label"]=_||"",A.className=ie(A.className,`${e}-column-has-sorters`),A.tabIndex=0,u.ellipsis&&(A.title=(B??"").toString()),A}})}return"children"in h&&(h=Object.assign(Object.assign({},h),{children:TN(e,h.children,n,r,o,i,a,v)})),h}),yO=e=>{const{column:t,sortOrder:n}=e;return{column:t,order:n,field:t.dataIndex,columnKey:t.key}},wO=e=>{const t=e.filter(n=>{let{sortOrder:r}=n;return r}).map(yO);if(t.length===0&&e.length){const n=e.length-1;return Object.assign(Object.assign({},yO(e[n])),{column:void 0,order:void 0,field:void 0,columnKey:void 0})}return t.length<=1?t[0]||{}:t},Kb=(e,t,n)=>{const r=t.slice().sort((a,s)=>s.multiplePriority-a.multiplePriority),o=e.slice(),i=r.filter(a=>{let{column:{sorter:s},sortOrder:c}=a;return bO(s)&&c});return i.length?o.sort((a,s)=>{for(let c=0;c{const s=a[n];return s?Object.assign(Object.assign({},a),{[n]:Kb(s,t,n)}):a}):o},CY=e=>{const{prefixCls:t,mergedColumns:n,sortDirections:r,tableLocale:o,showSorterTooltip:i,onSorterChange:a}=e,[s,c]=d.useState(Ub(n,!0)),u=(y,w)=>{const C=[];return y.forEach((S,E)=>{const k=yc(E,w);if(C.push(Ga(S,k)),Array.isArray(S.children)){const O=u(S.children,k);C.push.apply(C,Se(O))}}),C},p=d.useMemo(()=>{let y=!0;const w=Ub(n,!1);if(!w.length){const k=u(n);return s.filter(O=>{let{key:$}=O;return k.includes($)})}const C=[];function S(k){y?C.push(k):C.push(Object.assign(Object.assign({},k),{sortOrder:null}))}let E=null;return w.forEach(k=>{E===null?(S(k),k.sortOrder&&(k.multiplePriority===!1?y=!1:E=!0)):(E&&k.multiplePriority!==!1||(y=!1),S(k))}),C},[n,s]),v=d.useMemo(()=>{var y,w;const C=p.map(S=>{let{column:E,sortOrder:k}=S;return{column:E,order:k}});return{sortColumns:C,sortColumn:(y=C[0])===null||y===void 0?void 0:y.column,sortOrder:(w=C[0])===null||w===void 0?void 0:w.order}},[p]),h=y=>{let 
w;y.multiplePriority===!1||!p.length||p[0].multiplePriority===!1?w=[y]:w=[].concat(Se(p.filter(C=>{let{key:S}=C;return S!==y.key})),[y]),c(w),a(wO(w),w)};return[y=>TN(t,y,p,h,r,o,i),p,v,()=>wO(p)]},PN=(e,t)=>e.map(r=>{const o=Object.assign({},r);return o.title=sh(r.title,t),"children"in o&&(o.children=PN(o.children,t)),o}),EY=e=>[d.useCallback(n=>PN(n,e),[e])],kY=hN((e,t)=>{const{_renderTimes:n}=e,{_renderTimes:r}=t;return n!==r}),OY=mN((e,t)=>{const{_renderTimes:n}=e,{_renderTimes:r}=t;return n!==r}),$Y=e=>{const{componentCls:t,lineWidth:n,lineType:r,tableBorderColor:o,tableHeaderBg:i,tablePaddingVertical:a,tablePaddingHorizontal:s,calc:c}=e,u=`${de(n)} ${r} ${o}`,p=(v,h,m)=>({[`&${t}-${v}`]:{[`> ${t}-container`]:{[`> ${t}-content, > ${t}-body`]:{"\n > table > tbody > tr > th,\n > table > tbody > tr > td\n ":{[`> ${t}-expanded-row-fixed`]:{margin:`${de(c(h).mul(-1).equal())} - ${de(c(c(m).add(n)).mul(-1).equal())}`}}}}}});return{[`${t}-wrapper`]:{[`${t}${t}-bordered`]:Object.assign(Object.assign(Object.assign({[`> ${t}-title`]:{border:u,borderBottom:0},[`> ${t}-container`]:{borderInlineStart:u,borderTop:u,[` - > ${t}-content, - > ${t}-header, - > ${t}-body, - > ${t}-summary - `]:{"> table":{"\n > thead > tr > th,\n > thead > tr > td,\n > tbody > tr > th,\n > tbody > tr > td,\n > tfoot > tr > th,\n > tfoot > tr > td\n ":{borderInlineEnd:u},"> thead":{"> tr:not(:last-child) > th":{borderBottom:u},"> tr > th::before":{backgroundColor:"transparent !important"}},"\n > thead > tr,\n > tbody > tr,\n > tfoot > tr\n ":{[`> ${t}-cell-fix-right-first::after`]:{borderInlineEnd:u}},"\n > tbody > tr > th,\n > tbody > tr > td\n ":{[`> ${t}-expanded-row-fixed`]:{margin:`${de(c(a).mul(-1).equal())} ${de(c(c(s).add(n)).mul(-1).equal())}`,"&::after":{position:"absolute",top:0,insetInlineEnd:n,bottom:0,borderInlineEnd:u,content:'""'}}}}}},[`&${t}-scroll-horizontal`]:{[`> ${t}-container > ${t}-body`]:{"> table > tbody":{[` - > tr${t}-expanded-row, - > tr${t}-placeholder - `]:{"> th, > 
td":{borderInlineEnd:0}}}}}},p("middle",e.tablePaddingVerticalMiddle,e.tablePaddingHorizontalMiddle)),p("small",e.tablePaddingVerticalSmall,e.tablePaddingHorizontalSmall)),{[`> ${t}-footer`]:{border:u,borderTop:0}}),[`${t}-cell`]:{[`${t}-container:first-child`]:{borderTop:0},"&-scrollbar:not([rowspan])":{boxShadow:`0 ${de(n)} 0 ${de(n)} ${i}`}},[`${t}-bordered ${t}-cell-scrollbar`]:{borderInlineEnd:u}}}},IY=e=>{const{componentCls:t}=e;return{[`${t}-wrapper`]:{[`${t}-cell-ellipsis`]:Object.assign(Object.assign({},Ka),{wordBreak:"keep-all",[` - &${t}-cell-fix-left-last, - &${t}-cell-fix-right-first - `]:{overflow:"visible",[`${t}-cell-content`]:{display:"block",overflow:"hidden",textOverflow:"ellipsis"}},[`${t}-column-title`]:{overflow:"hidden",textOverflow:"ellipsis",wordBreak:"keep-all"}})}}},TY=e=>{const{componentCls:t}=e;return{[`${t}-wrapper`]:{[`${t}-tbody > tr${t}-placeholder`]:{textAlign:"center",color:e.colorTextDisabled,"\n &:hover > th,\n &:hover > td,\n ":{background:e.colorBgContainer}}}}},PY=e=>{const{componentCls:t,antCls:n,motionDurationSlow:r,lineWidth:o,paddingXS:i,lineType:a,tableBorderColor:s,tableExpandIconBg:c,tableExpandColumnWidth:u,borderRadius:p,tablePaddingVertical:v,tablePaddingHorizontal:h,tableExpandedRowBg:m,paddingXXS:b,expandIconMarginTop:y,expandIconSize:w,expandIconHalfInner:C,expandIconScale:S,calc:E}=e,k=`${de(o)} ${a} ${s}`,O=E(b).sub(o).equal();return{[`${t}-wrapper`]:{[`${t}-expand-icon-col`]:{width:u},[`${t}-row-expand-icon-cell`]:{textAlign:"center",[`${t}-row-expand-icon`]:{display:"inline-flex",float:"none",verticalAlign:"sub"}},[`${t}-row-indent`]:{height:1,float:"left"},[`${t}-row-expand-icon`]:Object.assign(Object.assign({},Qy(e)),{position:"relative",float:"left",width:w,height:w,color:"inherit",lineHeight:de(w),background:c,border:k,borderRadius:p,transform:`scale(${S})`,"&:focus, &:hover, &:active":{borderColor:"currentcolor"},"&::before, &::after":{position:"absolute",background:"currentcolor",transition:`transform 
${r} ease-out`,content:'""'},"&::before":{top:C,insetInlineEnd:O,insetInlineStart:O,height:o},"&::after":{top:O,bottom:O,insetInlineStart:C,width:o,transform:"rotate(90deg)"},"&-collapsed::before":{transform:"rotate(-180deg)"},"&-collapsed::after":{transform:"rotate(0deg)"},"&-spaced":{"&::before, &::after":{display:"none",content:"none"},background:"transparent",border:0,visibility:"hidden"}}),[`${t}-row-indent + ${t}-row-expand-icon`]:{marginTop:y,marginInlineEnd:i},[`tr${t}-expanded-row`]:{"&, &:hover":{"> th, > td":{background:m}},[`${n}-descriptions-view`]:{display:"flex",table:{flex:"auto",width:"100%"}}},[`${t}-expanded-row-fixed`]:{position:"relative",margin:`${de(E(v).mul(-1).equal())} ${de(E(h).mul(-1).equal())}`,padding:`${de(v)} ${de(h)}`}}}},MY=e=>{const{componentCls:t,antCls:n,iconCls:r,tableFilterDropdownWidth:o,tableFilterDropdownSearchWidth:i,paddingXXS:a,paddingXS:s,colorText:c,lineWidth:u,lineType:p,tableBorderColor:v,headerIconColor:h,fontSizeSM:m,tablePaddingHorizontal:b,borderRadius:y,motionDurationSlow:w,colorTextDescription:C,colorPrimary:S,tableHeaderFilterActiveBg:E,colorTextDisabled:k,tableFilterDropdownBg:O,tableFilterDropdownHeight:$,controlItemBgHover:T,controlItemBgActive:M,boxShadowSecondary:P,filterDropdownMenuBg:R,calc:A}=e,V=`${n}-dropdown`,z=`${t}-filter-dropdown`,B=`${n}-tree`,_=`${de(u)} ${p} ${v}`;return[{[`${t}-wrapper`]:{[`${t}-filter-column`]:{display:"flex",justifyContent:"space-between"},[`${t}-filter-trigger`]:{position:"relative",display:"flex",alignItems:"center",marginBlock:A(a).mul(-1).equal(),marginInline:`${de(a)} ${de(A(b).div(2).mul(-1).equal())}`,padding:`0 ${de(a)}`,color:h,fontSize:m,borderRadius:y,cursor:"pointer",transition:`all 
${w}`,"&:hover":{color:C,background:E},"&.active":{color:S}}}},{[`${n}-dropdown`]:{[z]:Object.assign(Object.assign({},jn(e)),{minWidth:o,backgroundColor:O,borderRadius:y,boxShadow:P,overflow:"hidden",[`${V}-menu`]:{maxHeight:$,overflowX:"hidden",border:0,boxShadow:"none",borderRadius:"unset",backgroundColor:R,"&:empty::after":{display:"block",padding:`${de(s)} 0`,color:k,fontSize:m,textAlign:"center",content:'"Not Found"'}},[`${z}-tree`]:{paddingBlock:`${de(s)} 0`,paddingInline:s,[B]:{padding:0},[`${B}-treenode ${B}-node-content-wrapper:hover`]:{backgroundColor:T},[`${B}-treenode-checkbox-checked ${B}-node-content-wrapper`]:{"&, &:hover":{backgroundColor:M}}},[`${z}-search`]:{padding:s,borderBottom:_,"&-input":{input:{minWidth:i},[r]:{color:k}}},[`${z}-checkall`]:{width:"100%",marginBottom:a,marginInlineStart:a},[`${z}-btns`]:{display:"flex",justifyContent:"space-between",padding:`${de(A(s).sub(u).equal())} ${de(s)}`,overflow:"hidden",borderTop:_}})}},{[`${n}-dropdown ${z}, ${z}-submenu`]:{[`${n}-checkbox-wrapper + span`]:{paddingInlineStart:s,color:c},"> ul":{maxHeight:"calc(100vh - 130px)",overflowX:"hidden",overflowY:"auto"}}}]},NY=e=>{const{componentCls:t,lineWidth:n,colorSplit:r,motionDurationSlow:o,zIndexTableFixed:i,tableBg:a,zIndexTableSticky:s,calc:c}=e,u=r;return{[`${t}-wrapper`]:{[` - ${t}-cell-fix-left, - ${t}-cell-fix-right - `]:{position:"sticky !important",zIndex:i,background:a},[` - ${t}-cell-fix-left-first::after, - ${t}-cell-fix-left-last::after - `]:{position:"absolute",top:0,right:{_skip_check_:!0,value:0},bottom:c(n).mul(-1).equal(),width:30,transform:"translateX(100%)",transition:`box-shadow ${o}`,content:'""',pointerEvents:"none"},[`${t}-cell-fix-left-all::after`]:{display:"none"},[` - ${t}-cell-fix-right-first::after, - ${t}-cell-fix-right-last::after - `]:{position:"absolute",top:0,bottom:c(n).mul(-1).equal(),left:{_skip_check_:!0,value:0},width:30,transform:"translateX(-100%)",transition:`box-shadow 
${o}`,content:'""',pointerEvents:"none"},[`${t}-container`]:{position:"relative","&::before, &::after":{position:"absolute",top:0,bottom:0,zIndex:c(s).add(1).equal({unit:!1}),width:30,transition:`box-shadow ${o}`,content:'""',pointerEvents:"none"},"&::before":{insetInlineStart:0},"&::after":{insetInlineEnd:0}},[`${t}-ping-left`]:{[`&:not(${t}-has-fix-left) ${t}-container::before`]:{boxShadow:`inset 10px 0 8px -8px ${u}`},[` - ${t}-cell-fix-left-first::after, - ${t}-cell-fix-left-last::after - `]:{boxShadow:`inset 10px 0 8px -8px ${u}`},[`${t}-cell-fix-left-last::before`]:{backgroundColor:"transparent !important"}},[`${t}-ping-right`]:{[`&:not(${t}-has-fix-right) ${t}-container::after`]:{boxShadow:`inset -10px 0 8px -8px ${u}`},[` - ${t}-cell-fix-right-first::after, - ${t}-cell-fix-right-last::after - `]:{boxShadow:`inset -10px 0 8px -8px ${u}`}},[`${t}-fixed-column-gapped`]:{[` - ${t}-cell-fix-left-first::after, - ${t}-cell-fix-left-last::after, - ${t}-cell-fix-right-first::after, - ${t}-cell-fix-right-last::after - `]:{boxShadow:"none"}}}}},RY=e=>{const{componentCls:t,antCls:n,margin:r}=e;return{[`${t}-wrapper`]:{[`${t}-pagination${n}-pagination`]:{margin:`${de(r)} 0`},[`${t}-pagination`]:{display:"flex",flexWrap:"wrap",rowGap:e.paddingXS,"> *":{flex:"none"},"&-left":{justifyContent:"flex-start"},"&-center":{justifyContent:"center"},"&-right":{justifyContent:"flex-end"}}}}},DY=e=>{const{componentCls:t,tableRadius:n}=e;return{[`${t}-wrapper`]:{[t]:{[`${t}-title, ${t}-header`]:{borderRadius:`${de(n)} ${de(n)} 0 0`},[`${t}-title + ${t}-container`]:{borderStartStartRadius:0,borderStartEndRadius:0,[`${t}-header, table`]:{borderRadius:0},"table > thead > tr:first-child":{"th:first-child, th:last-child, td:first-child, td:last-child":{borderRadius:0}}},"&-container":{borderStartStartRadius:n,borderStartEndRadius:n,"table > thead > tr:first-child":{"> *:first-child":{borderStartStartRadius:n},"> *:last-child":{borderStartEndRadius:n}}},"&-footer":{borderRadius:`0 0 
${de(n)} ${de(n)}`}}}}},jY=e=>{const{componentCls:t}=e;return{[`${t}-wrapper-rtl`]:{direction:"rtl",table:{direction:"rtl"},[`${t}-pagination-left`]:{justifyContent:"flex-end"},[`${t}-pagination-right`]:{justifyContent:"flex-start"},[`${t}-row-expand-icon`]:{float:"right","&::after":{transform:"rotate(-90deg)"},"&-collapsed::before":{transform:"rotate(180deg)"},"&-collapsed::after":{transform:"rotate(0deg)"}},[`${t}-container`]:{"&::before":{insetInlineStart:"unset",insetInlineEnd:0},"&::after":{insetInlineStart:0,insetInlineEnd:"unset"},[`${t}-row-indent`]:{float:"right"}}}}},LY=e=>{const{componentCls:t,antCls:n,iconCls:r,fontSizeIcon:o,padding:i,paddingXS:a,headerIconColor:s,headerIconHoverColor:c,tableSelectionColumnWidth:u,tableSelectedRowBg:p,tableSelectedRowHoverBg:v,tableRowHoverBg:h,tablePaddingHorizontal:m,calc:b}=e;return{[`${t}-wrapper`]:{[`${t}-selection-col`]:{width:u,[`&${t}-selection-col-with-dropdown`]:{width:b(u).add(o).add(b(i).div(4)).equal()}},[`${t}-bordered ${t}-selection-col`]:{width:b(u).add(b(a).mul(2)).equal(),[`&${t}-selection-col-with-dropdown`]:{width:b(u).add(o).add(b(i).div(4)).add(b(a).mul(2)).equal()}},[` - table tr th${t}-selection-column, - table tr td${t}-selection-column, - ${t}-selection-column - `]:{paddingInlineEnd:e.paddingXS,paddingInlineStart:e.paddingXS,textAlign:"center",[`${n}-radio-wrapper`]:{marginInlineEnd:0}},[`table tr th${t}-selection-column${t}-cell-fix-left`]:{zIndex:b(e.zIndexTableFixed).add(1).equal({unit:!1})},[`table tr th${t}-selection-column::after`]:{backgroundColor:"transparent !important"},[`${t}-selection`]:{position:"relative",display:"inline-flex",flexDirection:"column"},[`${t}-selection-extra`]:{position:"absolute",top:0,zIndex:1,cursor:"pointer",transition:`all ${e.motionDurationSlow}`,marginInlineStart:"100%",paddingInlineStart:de(b(m).div(4).equal()),[r]:{color:s,fontSize:o,verticalAlign:"baseline","&:hover":{color:c}}},[`${t}-tbody`]:{[`${t}-row`]:{[`&${t}-row-selected`]:{[`> 
${t}-cell`]:{background:p,"&-row-hover":{background:v}}},[`> ${t}-cell-row-hover`]:{background:h}}}}}},BY=e=>{const{componentCls:t,tableExpandColumnWidth:n,calc:r}=e,o=(i,a,s,c)=>({[`${t}${t}-${i}`]:{fontSize:c,[` - ${t}-title, - ${t}-footer, - ${t}-cell, - ${t}-thead > tr > th, - ${t}-tbody > tr > th, - ${t}-tbody > tr > td, - tfoot > tr > th, - tfoot > tr > td - `]:{padding:`${de(a)} ${de(s)}`},[`${t}-filter-trigger`]:{marginInlineEnd:de(r(s).div(2).mul(-1).equal())},[`${t}-expanded-row-fixed`]:{margin:`${de(r(a).mul(-1).equal())} ${de(r(s).mul(-1).equal())}`},[`${t}-tbody`]:{[`${t}-wrapper:only-child ${t}`]:{marginBlock:de(r(a).mul(-1).equal()),marginInline:`${de(r(n).sub(s).equal())} ${de(r(s).mul(-1).equal())}`}},[`${t}-selection-extra`]:{paddingInlineStart:de(r(s).div(4).equal())}}});return{[`${t}-wrapper`]:Object.assign(Object.assign({},o("middle",e.tablePaddingVerticalMiddle,e.tablePaddingHorizontalMiddle,e.tableFontSizeMiddle)),o("small",e.tablePaddingVerticalSmall,e.tablePaddingHorizontalSmall,e.tableFontSizeSmall))}},AY=e=>{const{componentCls:t,marginXXS:n,fontSizeIcon:r,headerIconColor:o,headerIconHoverColor:i}=e;return{[`${t}-wrapper`]:{[`${t}-thead th${t}-column-has-sorters`]:{outline:"none",cursor:"pointer",transition:`all ${e.motionDurationSlow}, left 0s`,"&:hover":{background:e.tableHeaderSortHoverBg,"&::before":{backgroundColor:"transparent !important"}},"&:focus-visible":{color:e.colorPrimary},[` - &${t}-cell-fix-left:hover, - &${t}-cell-fix-right:hover - `]:{background:e.tableFixedHeaderSortActiveBg}},[`${t}-thead th${t}-column-sort`]:{background:e.tableHeaderSortBg,"&::before":{backgroundColor:"transparent 
!important"}},[`td${t}-column-sort`]:{background:e.tableBodySortBg},[`${t}-column-title`]:{position:"relative",zIndex:1,flex:1},[`${t}-column-sorters`]:{display:"flex",flex:"auto",alignItems:"center",justifyContent:"space-between","&::after":{position:"absolute",inset:0,width:"100%",height:"100%",content:'""'}},[`${t}-column-sorters-tooltip-target-sorter`]:{"&::after":{content:"none"}},[`${t}-column-sorter`]:{marginInlineStart:n,color:o,fontSize:0,transition:`color ${e.motionDurationSlow}`,"&-inner":{display:"inline-flex",flexDirection:"column",alignItems:"center"},"&-up, &-down":{fontSize:r,"&.active":{color:e.colorPrimary}},[`${t}-column-sorter-up + ${t}-column-sorter-down`]:{marginTop:"-0.3em"}},[`${t}-column-sorters:hover ${t}-column-sorter`]:{color:i}}}},zY=e=>{const{componentCls:t,opacityLoading:n,tableScrollThumbBg:r,tableScrollThumbBgHover:o,tableScrollThumbSize:i,tableScrollBg:a,zIndexTableSticky:s,stickyScrollBarBorderRadius:c,lineWidth:u,lineType:p,tableBorderColor:v}=e,h=`${de(u)} ${p} ${v}`;return{[`${t}-wrapper`]:{[`${t}-sticky`]:{"&-holder":{position:"sticky",zIndex:s,background:e.colorBgContainer},"&-scroll":{position:"sticky",bottom:0,height:`${de(i)} !important`,zIndex:s,display:"flex",alignItems:"center",background:a,borderTop:h,opacity:n,"&:hover":{transformOrigin:"center bottom"},"&-bar":{height:i,backgroundColor:r,borderRadius:c,transition:`all ${e.motionDurationSlow}, transform none`,position:"absolute",bottom:0,"&:hover, &-active":{backgroundColor:o}}}}}}},xO=e=>{const{componentCls:t,lineWidth:n,tableBorderColor:r,calc:o}=e,i=`${de(n)} ${e.lineType} ${r}`;return{[`${t}-wrapper`]:{[`${t}-summary`]:{position:"relative",zIndex:e.zIndexTableFixed,background:e.tableBg,"> tr":{"> th, > td":{borderBottom:i}}},[`div${t}-summary`]:{boxShadow:`0 ${de(o(n).mul(-1).equal())} 0 ${r}`}}}},HY=e=>{const{componentCls:t,motionDurationMid:n,lineWidth:r,lineType:o,tableBorderColor:i,calc:a}=e,s=`${de(r)} ${o} 
${i}`,c=`${t}-expanded-row-cell`;return{[`${t}-wrapper`]:{[`${t}-tbody-virtual`]:{[`${t}-tbody-virtual-holder-inner`]:{[` - & > ${t}-row, - & > div:not(${t}-row) > ${t}-row - `]:{display:"flex",boxSizing:"border-box",width:"100%"}},[`${t}-cell`]:{borderBottom:s,transition:`background ${n}`},[`${t}-expanded-row`]:{[`${c}${c}-fixed`]:{position:"sticky",insetInlineStart:0,overflow:"hidden",width:`calc(var(--virtual-width) - ${de(r)})`,borderInlineEnd:"none"}}},[`${t}-bordered`]:{[`${t}-tbody-virtual`]:{"&:after":{content:'""',insetInline:0,bottom:0,borderBottom:s,position:"absolute"},[`${t}-cell`]:{borderInlineEnd:s,[`&${t}-cell-fix-right-first:before`]:{content:'""',position:"absolute",insetBlock:0,insetInlineStart:a(r).mul(-1).equal(),borderInlineStart:s}}},[`&${t}-virtual`]:{[`${t}-placeholder ${t}-cell`]:{borderInlineEnd:s,borderBottom:s}}}}}},FY=e=>{const{componentCls:t,fontWeightStrong:n,tablePaddingVertical:r,tablePaddingHorizontal:o,tableExpandColumnWidth:i,lineWidth:a,lineType:s,tableBorderColor:c,tableFontSize:u,tableBg:p,tableRadius:v,tableHeaderTextColor:h,motionDurationMid:m,tableHeaderBg:b,tableHeaderCellSplitColor:y,tableFooterTextColor:w,tableFooterBg:C,calc:S}=e,E=`${de(a)} ${s} ${c}`;return{[`${t}-wrapper`]:Object.assign(Object.assign({clear:"both",maxWidth:"100%"},Ps()),{[t]:Object.assign(Object.assign({},jn(e)),{fontSize:u,background:p,borderRadius:`${de(v)} ${de(v)} 0 0`,scrollbarColor:`${e.tableScrollThumbBg} ${e.tableScrollBg}`}),table:{width:"100%",textAlign:"start",borderRadius:`${de(v)} ${de(v)} 0 0`,borderCollapse:"separate",borderSpacing:0},[` - ${t}-cell, - ${t}-thead > tr > th, - ${t}-tbody > tr > th, - ${t}-tbody > tr > td, - tfoot > tr > th, - tfoot > tr > td - `]:{position:"relative",padding:`${de(r)} ${de(o)}`,overflowWrap:"break-word"},[`${t}-title`]:{padding:`${de(r)} ${de(o)}`},[`${t}-thead`]:{"\n > tr > th,\n > tr > td\n 
":{position:"relative",color:h,fontWeight:n,textAlign:"start",background:b,borderBottom:E,transition:`background ${m} ease`,"&[colspan]:not([colspan='1'])":{textAlign:"center"},[`&:not(:last-child):not(${t}-selection-column):not(${t}-row-expand-icon-cell):not([colspan])::before`]:{position:"absolute",top:"50%",insetInlineEnd:0,width:1,height:"1.6em",backgroundColor:y,transform:"translateY(-50%)",transition:`background-color ${m}`,content:'""'}},"> tr:not(:last-child) > th[colspan]":{borderBottom:0}},[`${t}-tbody`]:{"> tr":{"> th, > td":{transition:`background ${m}, border-color ${m}`,borderBottom:E,[` - > ${t}-wrapper:only-child, - > ${t}-expanded-row-fixed > ${t}-wrapper:only-child - `]:{[t]:{marginBlock:de(S(r).mul(-1).equal()),marginInline:`${de(S(i).sub(o).equal())} - ${de(S(o).mul(-1).equal())}`,[`${t}-tbody > tr:last-child > td`]:{borderBottom:0,"&:first-child, &:last-child":{borderRadius:0}}}}},"> th":{position:"relative",color:h,fontWeight:n,textAlign:"start",background:b,borderBottom:E,transition:`background ${m} ease`}}},[`${t}-footer`]:{padding:`${de(r)} ${de(o)}`,color:w,background:C}})}},_Y=e=>{const{colorFillAlter:t,colorBgContainer:n,colorTextHeading:r,colorFillSecondary:o,colorFillContent:i,controlItemBgActive:a,controlItemBgActiveHover:s,padding:c,paddingSM:u,paddingXS:p,colorBorderSecondary:v,borderRadiusLG:h,controlHeight:m,colorTextPlaceholder:b,fontSize:y,fontSizeSM:w,lineHeight:C,lineWidth:S,colorIcon:E,colorIconHover:k,opacityLoading:O,controlInteractiveSize:$}=e,T=new xn(o).onBackground(n).toHexShortString(),M=new xn(i).onBackground(n).toHexShortString(),P=new xn(t).onBackground(n).toHexShortString(),R=new xn(E),A=new 
xn(k),V=$/2-S,z=V*2+S*3;return{headerBg:P,headerColor:r,headerSortActiveBg:T,headerSortHoverBg:M,bodySortBg:P,rowHoverBg:P,rowSelectedBg:a,rowSelectedHoverBg:s,rowExpandedBg:t,cellPaddingBlock:c,cellPaddingInline:c,cellPaddingBlockMD:u,cellPaddingInlineMD:p,cellPaddingBlockSM:p,cellPaddingInlineSM:p,borderColor:v,headerBorderRadius:h,footerBg:P,footerColor:r,cellFontSize:y,cellFontSizeMD:y,cellFontSizeSM:y,headerSplitColor:v,fixedHeaderSortActiveBg:T,headerFilterHoverBg:i,filterDropdownMenuBg:n,filterDropdownBg:n,expandIconBg:n,selectionColumnWidth:m,stickyScrollBarBg:b,stickyScrollBarBorderRadius:100,expandIconMarginTop:(y*C-S*3)/2-Math.ceil((w*1.4-S*3)/2),headerIconColor:R.clone().setAlpha(R.getAlpha()*O).toRgbString(),headerIconHoverColor:A.clone().setAlpha(A.getAlpha()*O).toRgbString(),expandIconHalfInner:V,expandIconSize:z,expandIconScale:$/z}},SO=2,VY=In("Table",e=>{const{colorTextHeading:t,colorSplit:n,colorBgContainer:r,controlInteractiveSize:o,headerBg:i,headerColor:a,headerSortActiveBg:s,headerSortHoverBg:c,bodySortBg:u,rowHoverBg:p,rowSelectedBg:v,rowSelectedHoverBg:h,rowExpandedBg:m,cellPaddingBlock:b,cellPaddingInline:y,cellPaddingBlockMD:w,cellPaddingInlineMD:C,cellPaddingBlockSM:S,cellPaddingInlineSM:E,borderColor:k,footerBg:O,footerColor:$,headerBorderRadius:T,cellFontSize:M,cellFontSizeMD:P,cellFontSizeSM:R,headerSplitColor:A,fixedHeaderSortActiveBg:V,headerFilterHoverBg:z,filterDropdownBg:B,expandIconBg:_,selectionColumnWidth:H,stickyScrollBarBg:j,calc:L}=e,F=vn(e,{tableFontSize:M,tableBg:r,tableRadius:T,tablePaddingVertical:b,tablePaddingHorizontal:y,tablePaddingVerticalMiddle:w,tablePaddingHorizontalMiddle:C,tablePaddingVerticalSmall:S,tablePaddingHorizontalSmall:E,tableBorderColor:k,tableHeaderTextColor:a,tableHeaderBg:i,tableFooterTextColor:$,tableFooterBg:O,tableHeaderCellSplitColor:A,tableHeaderSortBg:s,tableHeaderSortHoverBg:c,tableBodySortBg:u,tableFixedHeaderSortActiveBg:V,tableHeaderFilterActiveBg:z,tableFilterDropdownBg:B,tableRowHoverBg
:p,tableSelectedRowBg:v,tableSelectedRowHoverBg:h,zIndexTableFixed:SO,zIndexTableSticky:L(SO).add(1).equal({unit:!1}),tableFontSizeMiddle:P,tableFontSizeSmall:R,tableSelectionColumnWidth:H,tableExpandIconBg:_,tableExpandColumnWidth:L(o).add(L(e.padding).mul(2)).equal(),tableExpandedRowBg:m,tableFilterDropdownWidth:120,tableFilterDropdownHeight:264,tableFilterDropdownSearchWidth:140,tableScrollThumbSize:8,tableScrollThumbBg:j,tableScrollThumbBgHover:t,tableScrollBg:n});return[FY(F),RY(F),xO(F),AY(F),MY(F),$Y(F),DY(F),PY(F),xO(F),TY(F),LY(F),NY(F),zY(F),IY(F),BY(F),jY(F),HY(F)]},_Y,{unitless:{expandIconScale:!0}}),WY=[],UY=(e,t)=>{var n,r;const{prefixCls:o,className:i,rootClassName:a,style:s,size:c,bordered:u,dropdownPrefixCls:p,dataSource:v,pagination:h,rowSelection:m,rowKey:b="key",rowClassName:y,columns:w,children:C,childrenColumnName:S,onChange:E,getPopupContainer:k,loading:O,expandIcon:$,expandable:T,expandedRowRender:M,expandIconColumnIndex:P,indentSize:R,scroll:A,sortDirections:V,locale:z,showSorterTooltip:B={target:"full-header"},virtual:_}=e;As();const H=d.useMemo(()=>w||Yw(C),[w,C]),j=d.useMemo(()=>H.some(pt=>pt.responsive),[H]),L=yP(j),F=d.useMemo(()=>{const pt=new Set(Object.keys(L).filter(dt=>L[dt]));return H.filter(dt=>!dt.responsive||dt.responsive.some($t=>pt.has($t)))},[H,L]),U=Ln(e,["className","style","columns"]),{locale:D=hi,direction:W,table:G,renderEmpty:q,getPrefixCls:J,getPopupContainer:Y}=d.useContext(ht),Q=Go(c),te=Object.assign(Object.assign({},D.Table),z),ce=v||WY,se=J("table",o),ne=J("dropdown",p),[,ae]=Ir(),ee=br(se),[re,le,pe]=VY(se,ee),Oe=Object.assign(Object.assign({childrenColumnName:S,expandIconColumnIndex:P},T),{expandIcon:(n=T==null?void 0:T.expandIcon)!==null&&n!==void 0?n:(r=G==null?void 0:G.expandable)===null||r===void 0?void 0:r.expandIcon}),{childrenColumnName:ge="children"}=Oe,Re=d.useMemo(()=>ce.some(pt=>pt==null?void 
0:pt[ge])?"nest":M||T!=null&&T.expandedRowRender?"row":null,[ce]),ye={body:d.useRef()},Te=HG(se),Ae=d.useRef(null),me=d.useRef(null);AG(t,()=>Object.assign(Object.assign({},me.current),{nativeElement:Ae.current}));const Ie=d.useMemo(()=>typeof b=="function"?b:pt=>pt==null?void 0:pt[b],[b]),[Le]=bY(ce,ge,Ie),Be={},et=function(pt,dt){let $t=arguments.length>2&&arguments[2]!==void 0?arguments[2]:!1;var kt,Kt,ln,Yt;const un=Object.assign(Object.assign({},Be),pt);$t&&((kt=Be.resetPagination)===null||kt===void 0||kt.call(Be),!((Kt=un.pagination)===null||Kt===void 0)&&Kt.current&&(un.pagination.current=1),h&&((ln=h.onChange)===null||ln===void 0||ln.call(h,1,(Yt=un.pagination)===null||Yt===void 0?void 0:Yt.pageSize))),A&&A.scrollToFirstRowOnChange!==!1&&ye.body.current&&x5(0,{getContainer:()=>ye.body.current}),E==null||E(un.pagination,un.filters,un.sorter,{currentDataSource:Wb(Kb(ce,un.sorterStates,ge),un.filterStates,ge),action:dt})},rt=(pt,dt)=>{et({sorter:pt,sorterStates:dt},"sort",!1)},[Ze,Ve,Ye,Ge]=CY({prefixCls:se,mergedColumns:F,onSorterChange:rt,sortDirections:V||["ascend","descend"],tableLocale:te,showSorterTooltip:B}),Fe=d.useMemo(()=>Kb(ce,Ve,ge),[ce,Ve]);Be.sorter=Ge(),Be.sorterStates=Ve;const we=(pt,dt)=>{et({filters:pt,filterStates:dt},"filter",!0)},[ze,Me,Pe]=mY({prefixCls:se,locale:te,dropdownPrefixCls:ne,mergedColumns:F,onFilterChange:we,getPopupContainer:k||Y,rootClassName:ie(a,ee)}),Ke=Wb(Fe,Me,ge);Be.filters=Pe,Be.filterStates=Me;const St=d.useMemo(()=>{const pt={};return Object.keys(Pe).forEach(dt=>{Pe[dt]!==null&&(pt[dt]=Pe[dt])}),Object.assign(Object.assign({},Ye),{filters:pt})},[Ye,Pe]),[Ft]=EY(St),Lt=(pt,dt)=>{et({pagination:Object.assign(Object.assign({},Be.pagination),{current:pt,pageSize:dt})},"paginate")},[Ct,Xt]=xY(Ke.length,Lt,h);Be.pagination=h===!1?{}:wY(Ct,h),Be.resetPagination=Xt;const Pt=d.useMemo(()=>{if(h===!1||!Ct.pageSize)return Ke;const{current:pt=1,total:dt,pageSize:$t=IN}=Ct;return 
Ke.length$t?Ke.slice((pt-1)*$t,pt*$t):Ke:Ke.slice((pt-1)*$t,pt*$t)},[!!h,Ke,Ct==null?void 0:Ct.current,Ct==null?void 0:Ct.pageSize,Ct==null?void 0:Ct.total]),[Gt,ft]=LG({prefixCls:se,data:Ke,pageData:Pt,getRowKey:Ie,getRecordByKey:Le,expandType:Re,childrenColumnName:ge,locale:te,getPopupContainer:k||Y},m),Je=(pt,dt,$t)=>{let kt;return typeof y=="function"?kt=ie(y(pt,dt,$t)):kt=ie(y),ie({[`${se}-row-selected`]:ft.has(Ie(pt,dt))},kt)};Oe.__PARENT_RENDER_ICON__=Oe.expandIcon,Oe.expandIcon=Oe.expandIcon||$||zG(te),Re==="nest"&&Oe.expandIconColumnIndex===void 0?Oe.expandIconColumnIndex=m?1:0:Oe.expandIconColumnIndex>0&&m&&(Oe.expandIconColumnIndex-=1),typeof Oe.indentSize!="number"&&(Oe.indentSize=typeof R=="number"?R:15);const He=d.useCallback(pt=>Ft(Gt(ze(Ze(pt)))),[Ze,ze,Gt]);let We,Et;if(h!==!1&&(Ct!=null&&Ct.total)){let pt;Ct.size?pt=Ct.size:pt=Q==="small"||Q==="middle"?"small":void 0;const dt=Kt=>d.createElement(cK,Object.assign({},Ct,{className:ie(`${se}-pagination ${se}-pagination-${Kt}`,Ct.className),size:pt})),$t=W==="rtl"?"left":"right",{position:kt}=Ct;if(kt!==null&&Array.isArray(kt)){const Kt=kt.find(un=>un.includes("top")),ln=kt.find(un=>un.includes("bottom")),Yt=kt.every(un=>`${un}`=="none");!Kt&&!ln&&!Yt&&(Et=dt($t)),Kt&&(We=dt(Kt.toLowerCase().replace("top",""))),ln&&(Et=dt(ln.toLowerCase().replace("bottom","")))}else Et=dt($t)}let wt;typeof O=="boolean"?wt={spinning:O}:typeof O=="object"&&(wt=Object.assign({spinning:!0},O));const _e=ie(pe,ee,`${se}-wrapper`,G==null?void 0:G.className,{[`${se}-wrapper-rtl`]:W==="rtl"},i,a,le),qe=Object.assign(Object.assign({},G==null?void 0:G.style),s),ot=typeof(z==null?void 0:z.emptyText)<"u"?z.emptyText:(q==null?void 0:q("Table"))||d.createElement(Xv,{componentName:"Table"}),at=_?OY:kY,xt={},_t=d.useMemo(()=>{const{fontSize:pt,lineHeight:dt,padding:$t,paddingXS:kt,paddingSM:Kt}=ae,ln=Math.floor(pt*dt);switch(Q){case"large":return $t*2+ln;case"small":return kt*2+ln;default:return Kt*2+ln}},[ae,Q]);return 
_&&(xt.listItemHeight=_t),re(d.createElement("div",{ref:Ae,className:_e,style:qe},d.createElement(XM,Object.assign({spinning:!1},wt),We,d.createElement(at,Object.assign({},xt,U,{ref:me,columns:F,direction:W,expandable:Oe,prefixCls:se,className:ie({[`${se}-middle`]:Q==="middle",[`${se}-small`]:Q==="small",[`${se}-bordered`]:u,[`${se}-empty`]:ce.length===0},pe,ee,le),data:Pt,rowKey:Ie,rowClassName:Je,emptyText:ot,internalHooks:Nd,internalRefs:ye,transformColumns:He,getContainerWidth:Te})),Et)))},KY=d.forwardRef(UY),qY=(e,t)=>{const n=d.useRef(0);return n.current+=1,d.createElement(KY,Object.assign({},e,{ref:t,_renderTimes:n.current}))},ca=d.forwardRef(qY);ca.SELECTION_COLUMN=Na;ca.EXPAND_COLUMN=ja;ca.SELECTION_ALL=zb;ca.SELECTION_INVERT=Hb;ca.SELECTION_NONE=Fb;ca.Column=kG;ca.ColumnGroup=OG;ca.Summary=iN;const XY=e=>{const{paddingXXS:t,lineWidth:n,tagPaddingHorizontal:r,componentCls:o,calc:i}=e,a=i(r).sub(n).equal(),s=i(t).sub(n).equal();return{[o]:Object.assign(Object.assign({},jn(e)),{display:"inline-block",height:"auto",marginInlineEnd:e.marginXS,paddingInline:a,fontSize:e.tagFontSize,lineHeight:e.tagLineHeight,whiteSpace:"nowrap",background:e.defaultBg,border:`${de(e.lineWidth)} ${e.lineType} ${e.colorBorder}`,borderRadius:e.borderRadiusSM,opacity:1,transition:`all ${e.motionDurationMid}`,textAlign:"start",position:"relative",[`&${o}-rtl`]:{direction:"rtl"},"&, a, a:hover":{color:e.defaultColor},[`${o}-close-icon`]:{marginInlineStart:s,fontSize:e.tagIconSize,color:e.colorTextDescription,cursor:"pointer",transition:`all ${e.motionDurationMid}`,"&:hover":{color:e.colorTextHeading}},[`&${o}-has-color`]:{borderColor:"transparent",[`&, a, a:hover, ${e.iconCls}-close, ${e.iconCls}-close:hover`]:{color:e.colorTextLightSolid}},"&-checkable":{backgroundColor:"transparent",borderColor:"transparent",cursor:"pointer",[`&:not(${o}-checkable-checked):hover`]:{color:e.colorPrimary,backgroundColor:e.colorFillSecondary},"&:active, 
&-checked":{color:e.colorTextLightSolid},"&-checked":{backgroundColor:e.colorPrimary,"&:hover":{backgroundColor:e.colorPrimaryHover}},"&:active":{backgroundColor:e.colorPrimaryActive}},"&-hidden":{display:"none"},[`> ${e.iconCls} + span, > span + ${e.iconCls}`]:{marginInlineStart:a}}),[`${o}-borderless`]:{borderColor:"transparent",background:e.tagBorderlessBg}}},r1=e=>{const{lineWidth:t,fontSizeIcon:n,calc:r}=e,o=e.fontSizeSM;return vn(e,{tagFontSize:o,tagLineHeight:de(r(e.lineHeightSM).mul(o).equal()),tagIconSize:r(n).sub(r(t).mul(2)).equal(),tagPaddingHorizontal:8,tagBorderlessBg:e.defaultBg})},o1=e=>({defaultBg:new xn(e.colorFillQuaternary).onBackground(e.colorBgContainer).toHexString(),defaultColor:e.colorText}),MN=In("Tag",e=>{const t=r1(e);return XY(t)},o1);var GY=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{const{prefixCls:n,style:r,className:o,checked:i,onChange:a,onClick:s}=e,c=GY(e,["prefixCls","style","className","checked","onChange","onClick"]),{getPrefixCls:u,tag:p}=d.useContext(ht),v=C=>{a==null||a(!i),s==null||s(C)},h=u("tag",n),[m,b,y]=MN(h),w=ie(h,`${h}-checkable`,{[`${h}-checkable-checked`]:i},p==null?void 0:p.className,o,b,y);return m(d.createElement("span",Object.assign({},c,{ref:t,style:Object.assign(Object.assign({},r),p==null?void 0:p.style),className:w,onClick:v})))}),QY=e=>NI(e,(t,n)=>{let{textColor:r,lightBorderColor:o,lightColor:i,darkColor:a}=n;return{[`${e.componentCls}${e.componentCls}-${t}`]:{color:r,background:i,borderColor:o,"&-inverse":{color:e.colorTextLightSolid,background:a,borderColor:a},[`&${e.componentCls}-borderless`]:{borderColor:"transparent"}}}}),ZY=ic(["Tag","preset"],e=>{const t=r1(e);return QY(t)},o1);function JY(e){return typeof e!="string"?e:e.charAt(0).toUpperCase()+e.slice(1)}const np=(e,t,n)=>{const 
r=JY(n);return{[`${e.componentCls}${e.componentCls}-${t}`]:{color:e[`color${n}`],background:e[`color${r}Bg`],borderColor:e[`color${r}Border`],[`&${e.componentCls}-borderless`]:{borderColor:"transparent"}}}},eQ=ic(["Tag","status"],e=>{const t=r1(e);return[np(t,"success","Success"),np(t,"processing","Info"),np(t,"error","Error"),np(t,"warning","Warning")]},o1);var tQ=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{const{prefixCls:n,className:r,rootClassName:o,style:i,children:a,icon:s,color:c,onClose:u,bordered:p=!0,visible:v}=e,h=tQ(e,["prefixCls","className","rootClassName","style","children","icon","color","onClose","bordered","visible"]),{getPrefixCls:m,direction:b,tag:y}=d.useContext(ht),[w,C]=d.useState(!0),S=Ln(h,["closeIcon","closable"]);d.useEffect(()=>{v!==void 0&&C(v)},[v]);const E=CP(c),k=S7(c),O=E||k,$=Object.assign(Object.assign({backgroundColor:c&&!O?c:void 0},y==null?void 0:y.style),i),T=m("tag",n),[M,P,R]=MN(T),A=ie(T,y==null?void 0:y.className,{[`${T}-${c}`]:O,[`${T}-has-color`]:c&&!O,[`${T}-hidden`]:!w,[`${T}-rtl`]:b==="rtl",[`${T}-borderless`]:!p},r,o,P,R),V=L=>{L.stopPropagation(),u==null||u(L),!L.defaultPrevented&&C(!1)},[,z]=IT(Zp(e),Zp(y),{closable:!1,closeIconRender:L=>{const F=d.createElement("span",{className:`${T}-close-icon`,onClick:V},L);return ZI(L,F,U=>({onClick:D=>{var W;(W=U==null?void 0:U.onClick)===null||W===void 0||W.call(U,D),V(D)},className:ie(U==null?void 0:U.className,`${T}-close-icon`)}))}}),B=typeof h.onClick=="function"||a&&a.type==="a",_=s||null,H=_?d.createElement(d.Fragment,null,_,a&&d.createElement("span",null,a)):a,j=d.createElement("span",Object.assign({},S,{ref:t,className:A,style:$}),H,z,E&&d.createElement(ZY,{key:"preset",prefixCls:T}),k&&d.createElement(eQ,{key:"status",prefixCls:T}));return 
M(B?d.createElement(Lv,{component:"Tag"},j):j)}),NN=nQ;NN.CheckableTag=YY;const rQ=e=>{const t=e!=null&&e.algorithm?qu(e.algorithm):qu(md),n=Object.assign(Object.assign({},ql),e==null?void 0:e.token);return Z2(n,{override:e==null?void 0:e.token},t,Yy)};function oQ(e){const{sizeUnit:t,sizeStep:n}=e,r=n-2;return{sizeXXL:t*(r+10),sizeXL:t*(r+6),sizeLG:t*(r+2),sizeMD:t*(r+2),sizeMS:t*(r+1),size:t*r,sizeSM:t*r,sizeXS:t*(r-1),sizeXXS:t*(r-1)}}const iQ=(e,t)=>{const n=t??md(e),r=n.fontSizeSM,o=n.controlHeight-4;return Object.assign(Object.assign(Object.assign(Object.assign(Object.assign({},n),oQ(t??e)),CI(r)),{controlHeight:o}),SI(Object.assign(Object.assign({},n),{controlHeight:o})))},jo=(e,t)=>new xn(e).setAlpha(t).toRgbString(),ml=(e,t)=>new xn(e).lighten(t).toHexString(),aQ=e=>{const t=$s(e,{theme:"dark"});return{1:t[0],2:t[1],3:t[2],4:t[3],5:t[6],6:t[5],7:t[4],8:t[6],9:t[5],10:t[4]}},sQ=(e,t)=>{const n=e||"#000",r=t||"#fff";return{colorBgBase:n,colorTextBase:r,colorText:jo(r,.85),colorTextSecondary:jo(r,.65),colorTextTertiary:jo(r,.45),colorTextQuaternary:jo(r,.25),colorFill:jo(r,.18),colorFillSecondary:jo(r,.12),colorFillTertiary:jo(r,.08),colorFillQuaternary:jo(r,.04),colorBgSolid:jo(r,.95),colorBgSolidHover:jo(r,1),colorBgSolidActive:jo(r,.9),colorBgElevated:ml(n,12),colorBgContainer:ml(n,8),colorBgLayout:ml(n,0),colorBgSpotlight:ml(n,26),colorBgBlur:jo(r,.04),colorBorder:ml(n,26),colorBorderSecondary:ml(n,19)}},lQ=(e,t)=>{const n=Object.keys(Ky).map(o=>{const i=$s(e[o],{theme:"dark"});return new Array(10).fill(1).reduce((a,s,c)=>(a[`${o}-${c+1}`]=i[c],a[`${o}${c+1}`]=i[c],a),{})}).reduce((o,i)=>(o=Object.assign(Object.assign({},o),i),o),{}),r=t??md(e);return Object.assign(Object.assign(Object.assign({},r),n),xI(e,{generateColorPalettes:aQ,generateNeutralColorPalettes:sQ}))};function cQ(){const[e,t,n]=Ir();return{theme:e,token:t,hashId:n}}const 
CO={defaultSeed:Yu.token,useToken:cQ,defaultAlgorithm:md,darkAlgorithm:lQ,compactAlgorithm:iQ,getDesignToken:rQ,defaultConfig:Yu,_internalContext:qy},uQ=(e,t,n,r)=>{const{titleMarginBottom:o,fontWeightStrong:i}=r;return{marginBottom:o,color:n,fontWeight:i,fontSize:e,lineHeight:t}},dQ=e=>{const t=[1,2,3,4,5],n={};return t.forEach(r=>{n[` - h${r}&, - div&-h${r}, - div&-h${r} > textarea, - h${r} - `]=uQ(e[`fontSizeHeading${r}`],e[`lineHeightHeading${r}`],e.colorTextHeading,e)}),n},fQ=e=>{const{componentCls:t}=e;return{"a&, a":Object.assign(Object.assign({},Qy(e)),{userSelect:"text",[`&[disabled], &${t}-disabled`]:{color:e.colorTextDisabled,cursor:"not-allowed","&:active, &:hover":{color:e.colorTextDisabled},"&:active":{pointerEvents:"none"}}})}},pQ=e=>({code:{margin:"0 0.2em",paddingInline:"0.4em",paddingBlock:"0.2em 0.1em",fontSize:"85%",fontFamily:e.fontFamilyCode,background:"rgba(150, 150, 150, 0.1)",border:"1px solid rgba(100, 100, 100, 0.2)",borderRadius:3},kbd:{margin:"0 0.2em",paddingInline:"0.4em",paddingBlock:"0.15em 0.1em",fontSize:"90%",fontFamily:e.fontFamilyCode,background:"rgba(150, 150, 150, 0.06)",border:"1px solid rgba(100, 100, 100, 0.2)",borderBottomWidth:2,borderRadius:3},mark:{padding:0,backgroundColor:Kp[2]},"u, ins":{textDecoration:"underline",textDecorationSkipInk:"auto"},"s, del":{textDecoration:"line-through"},strong:{fontWeight:600},"ul, ol":{marginInline:0,marginBlock:"0 1em",padding:0,li:{marginInline:"20px 0",marginBlock:0,paddingInline:"4px 0",paddingBlock:0}},ul:{listStyleType:"circle",ul:{listStyleType:"disc"}},ol:{listStyleType:"decimal"},"pre, blockquote":{margin:"1em 0"},pre:{padding:"0.4em 0.6em",whiteSpace:"pre-wrap",wordWrap:"break-word",background:"rgba(150, 150, 150, 0.1)",border:"1px solid rgba(100, 100, 100, 0.2)",borderRadius:3,fontFamily:e.fontFamilyCode,code:{display:"inline",margin:0,padding:0,fontSize:"inherit",fontFamily:"inherit",background:"transparent",border:0}},blockquote:{paddingInline:"0.6em 
0",paddingBlock:0,borderInlineStart:"4px solid rgba(100, 100, 100, 0.2)",opacity:.85}}),vQ=e=>{const{componentCls:t,paddingSM:n}=e,r=n;return{"&-edit-content":{position:"relative","div&":{insetInlineStart:e.calc(e.paddingSM).mul(-1).equal(),marginTop:e.calc(r).mul(-1).equal(),marginBottom:`calc(1em - ${de(r)})`},[`${t}-edit-content-confirm`]:{position:"absolute",insetInlineEnd:e.calc(e.marginXS).add(2).equal(),insetBlockEnd:e.marginXS,color:e.colorTextDescription,fontWeight:"normal",fontSize:e.fontSize,fontStyle:"normal",pointerEvents:"none"},textarea:{margin:"0!important",MozTransition:"none",height:"1em"}}}},hQ=e=>({[`${e.componentCls}-copy-success`]:{"\n &,\n &:hover,\n &:focus":{color:e.colorSuccess}},[`${e.componentCls}-copy-icon-only`]:{marginInlineStart:0}}),gQ=()=>({"\n a&-ellipsis,\n span&-ellipsis\n ":{display:"inline-block",maxWidth:"100%"},"&-ellipsis-single-line":{whiteSpace:"nowrap",overflow:"hidden",textOverflow:"ellipsis","a&, span&":{verticalAlign:"bottom"},"> code":{paddingBlock:0,maxWidth:"calc(100% - 1.2em)",display:"inline-block",overflow:"hidden",textOverflow:"ellipsis",verticalAlign:"bottom",boxSizing:"content-box"}},"&-ellipsis-multiple-line":{display:"-webkit-box",overflow:"hidden",WebkitLineClamp:3,WebkitBoxOrient:"vertical"}}),mQ=e=>{const{componentCls:t,titleMarginTop:n}=e;return{[t]:Object.assign(Object.assign(Object.assign(Object.assign(Object.assign(Object.assign(Object.assign(Object.assign(Object.assign({color:e.colorText,wordBreak:"break-word",lineHeight:e.lineHeight,[`&${t}-secondary`]:{color:e.colorTextDescription},[`&${t}-success`]:{color:e.colorSuccess},[`&${t}-warning`]:{color:e.colorWarning},[`&${t}-danger`]:{color:e.colorError,"a&:active, a&:focus":{color:e.colorErrorActive},"a&:hover":{color:e.colorErrorHover}},[`&${t}-disabled`]:{color:e.colorTextDisabled,cursor:"not-allowed",userSelect:"none"},"\n div&,\n p\n ":{marginBottom:"1em"}},dQ(e)),{[` - & + h1${t}, - & + h2${t}, - & + h3${t}, - & + h4${t}, - & + h5${t} - 
`]:{marginTop:n},"\n div,\n ul,\n li,\n p,\n h1,\n h2,\n h3,\n h4,\n h5":{"\n + h1,\n + h2,\n + h3,\n + h4,\n + h5\n ":{marginTop:n}}}),pQ(e)),fQ(e)),{[` - ${t}-expand, - ${t}-collapse, - ${t}-edit, - ${t}-copy - `]:Object.assign(Object.assign({},Qy(e)),{marginInlineStart:e.marginXXS})}),vQ(e)),hQ(e)),gQ()),{"&-rtl":{direction:"rtl"}})}},bQ=()=>({titleMarginTop:"1.2em",titleMarginBottom:"0.5em"}),RN=In("Typography",e=>[mQ(e)],bQ),yQ=e=>{const{prefixCls:t,"aria-label":n,className:r,style:o,direction:i,maxLength:a,autoSize:s=!0,value:c,onSave:u,onCancel:p,onEnd:v,component:h,enterIcon:m=d.createElement(Rq,null)}=e,b=d.useRef(null),y=d.useRef(!1),w=d.useRef(),[C,S]=d.useState(c);d.useEffect(()=>{S(c)},[c]),d.useEffect(()=>{var B;if(!((B=b.current)===null||B===void 0)&&B.resizableTextArea){const{textArea:_}=b.current.resizableTextArea;_.focus();const{length:H}=_.value;_.setSelectionRange(H,H)}},[]);const E=B=>{let{target:_}=B;S(_.value.replace(/[\n\r]/g,""))},k=()=>{y.current=!0},O=()=>{y.current=!1},$=B=>{let{keyCode:_}=B;y.current||(w.current=_)},T=()=>{u(C.trim())},M=B=>{let{keyCode:_,ctrlKey:H,altKey:j,metaKey:L,shiftKey:F}=B;w.current!==_||y.current||H||j||L||F||(_===De.ENTER?(T(),v==null||v()):_===De.ESC&&p())},P=()=>{T()},[R,A,V]=RN(t),z=ie(t,`${t}-edit-content`,{[`${t}-rtl`]:i==="rtl",[`${t}-${h}`]:!!h},r,A,V);return R(d.createElement("div",{className:z,style:o},d.createElement(TM,{ref:b,maxLength:a,value:C,onChange:E,onKeyDown:$,onKeyUp:M,onCompositionStart:k,onCompositionEnd:O,onBlur:P,"aria-label":n,rows:1,autoSize:s}),m!==null?Dr(m,{className:`${t}-edit-content-confirm`}):null))};var wQ=function(){var e=document.getSelection();if(!e.rangeCount)return function(){};for(var t=document.activeElement,n=[],r=0;r"u"){window.clipboardData.clearData();var v=EO[t.format]||EO.default;window.clipboardData.setData(v,e)}else 
p.clipboardData.clearData(),p.clipboardData.setData(t.format,e);t.onCopy&&(p.preventDefault(),t.onCopy(p.clipboardData))}),document.body.appendChild(s),i.selectNodeContents(s),a.addRange(i);var u=document.execCommand("copy");if(!u)throw new Error("copy command was unsuccessful");c=!0}catch{try{window.clipboardData.setData(t.format||"text",e),t.onCopy&&t.onCopy(window.clipboardData),c=!0}catch{r=CQ("message"in t?t.message:SQ),window.prompt(r,e)}}finally{a&&(typeof a.removeRange=="function"?a.removeRange(i):a.removeAllRanges()),s&&document.body.removeChild(s),o()}return c}var kQ=EQ;const OQ=js(kQ);var $Q=function(e,t,n,r){function o(i){return i instanceof n?i:new n(function(a){a(i)})}return new(n||(n=Promise))(function(i,a){function s(p){try{u(r.next(p))}catch(v){a(v)}}function c(p){try{u(r.throw(p))}catch(v){a(v)}}function u(p){p.done?i(p.value):o(p.value).then(s,c)}u((r=r.apply(e,t||[])).next())})};const IQ=e=>{let{copyConfig:t,children:n}=e;const[r,o]=d.useState(!1),[i,a]=d.useState(!1),s=d.useRef(null),c=()=>{s.current&&clearTimeout(s.current)},u={};t.format&&(u.format=t.format),d.useEffect(()=>c,[]);const p=gn(v=>$Q(void 0,void 0,void 0,function*(){var h;v==null||v.preventDefault(),v==null||v.stopPropagation(),a(!0);try{const m=typeof t.text=="function"?yield t.text():t.text;OQ(m||SK(n,!0).join("")||"",u),a(!1),o(!0),c(),s.current=setTimeout(()=>{o(!1)},3e3),(h=t.onCopy)===null||h===void 0||h.call(t,v)}catch(m){throw a(!1),m}}));return{copied:r,copyLoading:i,onClick:p}};function qm(e,t){return d.useMemo(()=>{const n=!!e;return[n,Object.assign(Object.assign({},t),n&&typeof e=="object"?e:null)]},[e])}const TQ=e=>{const t=d.useRef();return d.useEffect(()=>{t.current=e}),t.current},PQ=(e,t,n)=>d.useMemo(()=>e===!0?{title:t??n}:d.isValidElement(e)?{title:e}:typeof e=="object"?Object.assign({title:t??n},e):{title:e},[e,t,n]);var MQ=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof 
Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{const{prefixCls:n,component:r="article",className:o,rootClassName:i,setContentRef:a,children:s,direction:c,style:u}=e,p=MQ(e,["prefixCls","component","className","rootClassName","setContentRef","children","direction","style"]),{getPrefixCls:v,direction:h,typography:m}=d.useContext(ht),b=c??h,y=a?Wr(t,a):t,w=v("typography",n),[C,S,E]=RN(w),k=ie(w,m==null?void 0:m.className,{[`${w}-rtl`]:b==="rtl"},o,i,S,E),O=Object.assign(Object.assign({},m==null?void 0:m.style),u);return C(d.createElement(r,Object.assign({className:k,style:O,ref:y},p),s))});function kO(e){return e===!1?[!1,!1]:Array.isArray(e)?e:[e]}function Xm(e,t,n){return e===!0||e===void 0?t:e||n&&t}function NQ(e){const t=document.createElement("em");e.appendChild(t);const n=e.getBoundingClientRect(),r=t.getBoundingClientRect();return e.removeChild(t),n.left>r.left||r.right>n.right||n.top>r.top||r.bottom>n.bottom}const i1=e=>["string","number"].includes(typeof e),RQ=e=>{let{prefixCls:t,copied:n,locale:r,iconOnly:o,tooltips:i,icon:a,tabIndex:s,onCopy:c,loading:u}=e;const p=kO(i),v=kO(a),{copied:h,copy:m}=r??{},b=n?h:m,y=Xm(p[n?1:0],b),w=typeof y=="string"?y:b;return d.createElement(gi,{title:y},d.createElement("button",{type:"button",className:ie(`${t}-copy`,{[`${t}-copy-success`]:n,[`${t}-copy-icon-only`]:o}),onClick:c,"aria-label":w,tabIndex:s},n?Xm(v[1],d.createElement(Cw,null),!0):Xm(v[0],u?d.createElement(Xa,null):d.createElement(yq,null),!0)))},rp=d.forwardRef((e,t)=>{let{style:n,children:r}=e;const o=d.useRef(null);return d.useImperativeHandle(t,()=>({isExceed:()=>{const i=o.current;return i.scrollHeight>i.clientHeight},getHeight:()=>o.current.clientHeight})),d.createElement("span",{"aria-hidden":!0,ref:o,style:Object.assign({position:"fixed",display:"block",left:0,top:0,pointerEvents:"none",backgroundColor:"rgba(255, 0, 0, 0.65)"},n)},r)}),DQ=e=>e.reduce((t,n)=>t+(i1(n)?String(n).length:1),0);function OO(e,t){let 
n=0;const r=[];for(let o=0;ot){const u=t-n;return r.push(String(i).slice(0,u)),r}r.push(i),n=c}return e}const Gm=0,Ym=1,Qm=2,Zm=3,$O=4,op={display:"-webkit-box",overflow:"hidden",WebkitBoxOrient:"vertical"};function jQ(e){const{enableMeasure:t,width:n,text:r,children:o,rows:i,expanded:a,miscDeps:s,onEllipsis:c}=e,u=d.useMemo(()=>lo(r),[r]),p=d.useMemo(()=>DQ(u),[r]),v=d.useMemo(()=>o(u,!1),[r]),[h,m]=d.useState(null),b=d.useRef(null),y=d.useRef(null),w=d.useRef(null),C=d.useRef(null),S=d.useRef(null),[E,k]=d.useState(!1),[O,$]=d.useState(Gm),[T,M]=d.useState(0),[P,R]=d.useState(null);sn(()=>{$(t&&n&&p?Ym:Gm)},[n,r,i,t,u]),sn(()=>{var B,_,H,j;if(O===Ym){$(Qm);const L=y.current&&getComputedStyle(y.current).whiteSpace;R(L)}else if(O===Qm){const L=!!(!((B=w.current)===null||B===void 0)&&B.isExceed());$(L?Zm:$O),m(L?[0,p]:null),k(L);const F=((_=w.current)===null||_===void 0?void 0:_.getHeight())||0,U=i===1?0:((H=C.current)===null||H===void 0?void 0:H.getHeight())||0,D=((j=S.current)===null||j===void 0?void 0:j.getHeight())||0,W=Math.max(F,U+D);M(W+1),c(L)}},[O]);const A=h?Math.ceil((h[0]+h[1])/2):0;sn(()=>{var B;const[_,H]=h||[0,0];if(_!==H){const L=(((B=b.current)===null||B===void 0?void 0:B.getHeight())||0)>T;let F=A;H-_===1&&(F=L?_:H),m(L?[_,F]:[F,H])}},[h,A]);const V=d.useMemo(()=>{if(!t)return o(u,!1);if(O!==Zm||!h||h[0]!==h[1]){const B=o(u,!1);return[$O,Gm].includes(O)?B:d.createElement("span",{style:Object.assign(Object.assign({},op),{WebkitLineClamp:i})},B)}return o(a?u:OO(u,h[0]),E)},[a,O,h,u].concat(Se(s))),z={width:n,margin:0,padding:0,whiteSpace:P==="nowrap"?"normal":"inherit"};return 
d.createElement(d.Fragment,null,V,O===Qm&&d.createElement(d.Fragment,null,d.createElement(rp,{style:Object.assign(Object.assign(Object.assign({},z),op),{WebkitLineClamp:i}),ref:w},v),d.createElement(rp,{style:Object.assign(Object.assign(Object.assign({},z),op),{WebkitLineClamp:i-1}),ref:C},v),d.createElement(rp,{style:Object.assign(Object.assign(Object.assign({},z),op),{WebkitLineClamp:1}),ref:S},o([],!0))),O===Zm&&h&&h[0]!==h[1]&&d.createElement(rp,{style:Object.assign(Object.assign({},z),{top:400}),ref:b},o(OO(u,A),!0)),O===Ym&&d.createElement("span",{style:{whiteSpace:"inherit"},ref:y}))}const LQ=e=>{let{enableEllipsis:t,isEllipsis:n,children:r,tooltipProps:o}=e;return!(o!=null&&o.title)||!t?r:d.createElement(gi,Object.assign({open:n?void 0:!1},o),r)};var BQ=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{var n;const{prefixCls:r,className:o,style:i,type:a,disabled:s,children:c,ellipsis:u,editable:p,copyable:v,component:h,title:m}=e,b=BQ(e,["prefixCls","className","style","type","disabled","children","ellipsis","editable","copyable","component","title"]),{getPrefixCls:y,direction:w}=d.useContext(ht),[C]=bi("Text"),S=d.useRef(null),E=d.useRef(null),k=y("typography",r),O=Ln(b,["mark","code","delete","underline","strong","keyboard","italic"]),[$,T]=qm(p),[M,P]=Dn(!1,{value:T.editing}),{triggerType:R=["icon"]}=T,A=Me=>{var Pe;Me&&((Pe=T.onStart)===null||Pe===void 0||Pe.call(T)),P(Me)},V=TQ(M);sn(()=>{var Me;!M&&V&&((Me=E.current)===null||Me===void 0||Me.focus())},[M]);const z=Me=>{Me==null||Me.preventDefault(),A(!0)},B=Me=>{var Pe;(Pe=T.onChange)===null||Pe===void 0||Pe.call(T,Me),A(!1)},_=()=>{var Me;(Me=T.onCancel)===null||Me===void 
0||Me.call(T),A(!1)},[H,j]=qm(v),{copied:L,copyLoading:F,onClick:U}=IQ({copyConfig:j,children:c}),[D,W]=d.useState(!1),[G,q]=d.useState(!1),[J,Y]=d.useState(!1),[Q,te]=d.useState(!1),[ce,se]=d.useState(!0),[ne,ae]=qm(u,{expandable:!1,symbol:Me=>Me?C==null?void 0:C.collapse:C==null?void 0:C.expand}),[ee,re]=Dn(ae.defaultExpanded||!1,{value:ae.expanded}),le=ne&&(!ee||ae.expandable==="collapsible"),{rows:pe=1}=ae,Oe=d.useMemo(()=>le&&(ae.suffix!==void 0||ae.onEllipsis||ae.expandable||$||H),[le,ae,$,H]);sn(()=>{ne&&!Oe&&(W(Sb("webkitLineClamp")),q(Sb("textOverflow")))},[Oe,ne]);const[ge,Re]=d.useState(le),ye=d.useMemo(()=>Oe?!1:pe===1?G:D,[Oe,G,D]);sn(()=>{Re(ye&&le)},[ye,le]);const Te=le&&(ge?Q:J),Ae=le&&pe===1&&ge,me=le&&pe>1&&ge,Ie=(Me,Pe)=>{var Ke;re(Pe.expanded),(Ke=ae.onExpand)===null||Ke===void 0||Ke.call(ae,Me,Pe)},[Le,Be]=d.useState(0),et=Me=>{let{offsetWidth:Pe}=Me;Be(Pe)},rt=Me=>{var Pe;Y(Me),J!==Me&&((Pe=ae.onEllipsis)===null||Pe===void 0||Pe.call(ae,Me))};d.useEffect(()=>{const Me=S.current;if(ne&&ge&&Me){const Pe=NQ(Me);Q!==Pe&&te(Pe)}},[ne,ge,c,me,ce,Le]),d.useEffect(()=>{const Me=S.current;if(typeof IntersectionObserver>"u"||!Me||!ge||!le)return;const Pe=new IntersectionObserver(()=>{se(!!Me.offsetParent)});return Pe.observe(Me),()=>{Pe.disconnect()}},[ge,le]);const Ze=PQ(ae.tooltip,T.text,c),Ve=d.useMemo(()=>{if(!(!ne||ge))return[T.text,c,m,Ze.title].find(i1)},[ne,ge,m,Ze.title,Te]);if(M)return d.createElement(yQ,{value:(n=T.text)!==null&&n!==void 0?n:typeof c=="string"?c:"",onSave:B,onCancel:_,onEnd:T.onEnd,prefixCls:k,className:o,style:i,direction:w,component:h,maxLength:T.maxLength,autoSize:T.autoSize,enterIcon:T.enterIcon});const Ye=()=>{const{expandable:Me,symbol:Pe}=ae;return Me?d.createElement("button",{type:"button",key:"expand",className:`${k}-${ee?"collapse":"expand"}`,onClick:Ke=>Ie(Ke,{expanded:!ee}),"aria-label":ee?C.collapse:C==null?void 0:C.expand},typeof 
Pe=="function"?Pe(ee):Pe):null},Ge=()=>{if(!$)return;const{icon:Me,tooltip:Pe,tabIndex:Ke}=T,St=lo(Pe)[0]||(C==null?void 0:C.edit),Ft=typeof St=="string"?St:"";return R.includes("icon")?d.createElement(gi,{key:"edit",title:Pe===!1?"":St},d.createElement("button",{type:"button",ref:E,className:`${k}-edit`,onClick:z,"aria-label":Ft,tabIndex:Ke},Me||d.createElement(Pq,{role:"button"}))):null},Fe=()=>H?d.createElement(RQ,Object.assign({key:"copy"},j,{prefixCls:k,copied:L,locale:C,onCopy:U,loading:F,iconOnly:c==null})):null,we=Me=>[Me&&Ye(),Ge(),Fe()],ze=Me=>[Me&&!ee&&d.createElement("span",{"aria-hidden":!0,key:"ellipsis"},zQ),ae.suffix,we(Me)];return d.createElement(qo,{onResize:et,disabled:!le},Me=>d.createElement(LQ,{tooltipProps:Ze,enableEllipsis:le,isEllipsis:Te},d.createElement(DN,Object.assign({className:ie({[`${k}-${a}`]:a,[`${k}-disabled`]:s,[`${k}-ellipsis`]:ne,[`${k}-ellipsis-single-line`]:Ae,[`${k}-ellipsis-multiple-line`]:me},o),prefixCls:r,style:Object.assign(Object.assign({},i),{WebkitLineClamp:me?pe:void 0}),component:h,ref:Wr(Me,S,t),direction:w,onClick:R.includes("text")?z:void 0,"aria-label":Ve==null?void 0:Ve.toString(),title:m},O),d.createElement(jQ,{enableMeasure:le&&!ge,text:c,rows:pe,width:Le,onEllipsis:rt,expanded:ee,miscDeps:[L,ee,F,$,H]},(Pe,Ke)=>AQ(e,d.createElement(d.Fragment,null,Pe.length>0&&Ke&&!ee&&Ve?d.createElement("span",{key:"show-content","aria-hidden":!0},Pe):Pe,ze(Ke)))))))});var HQ=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{var{ellipsis:n,rel:r}=e,o=HQ(e,["ellipsis","rel"]);const i=Object.assign(Object.assign({},o),{rel:r===void 0&&o.target==="_blank"?"noopener noreferrer":r});return delete 
i.navigate,d.createElement(lh,Object.assign({},i,{ref:t,ellipsis:!!n,component:"a"}))}),_Q=d.forwardRef((e,t)=>d.createElement(lh,Object.assign({ref:t},e,{component:"div"})));var VQ=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{var{ellipsis:n}=e,r=VQ(e,["ellipsis"]);const o=d.useMemo(()=>n&&typeof n=="object"?Ln(n,["expandable","rows"]):n,[n]);return d.createElement(lh,Object.assign({ref:t},r,{ellipsis:o,component:"span"}))},UQ=d.forwardRef(WQ);var KQ=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{const{level:n=1}=e,r=KQ(e,["level"]),o=qQ.includes(n)?`h${n}`:"h1";return d.createElement(lh,Object.assign({ref:t},r,{component:o}))}),Rd=DN;Rd.Text=UQ;Rd.Link=FQ;Rd.Title=XQ;Rd.Paragraph=_Q;const Jm=function(e,t){if(e&&t){var n=Array.isArray(t)?t:t.split(","),r=e.name||"",o=e.type||"",i=o.replace(/\/.*$/,"");return n.some(function(a){var s=a.trim();if(/^\*(\/\*)?$/.test(a))return!0;if(s.charAt(0)==="."){var c=r.toLowerCase(),u=s.toLowerCase(),p=[u];return(u===".jpg"||u===".jpeg")&&(p=[".jpg",".jpeg"]),p.some(function(v){return c.endsWith(v)})}return/\/\*$/.test(s)?i===s.replace(/\/.*$/,""):o===s?!0:/^\w+$/.test(s)?(Fn(!1,"Upload takes an invalidate 'accept' type '".concat(s,"'.Skip for check.")),!0):!1})}return!0};function GQ(e,t){var n="cannot ".concat(e.method," ").concat(e.action," ").concat(t.status,"'"),r=new Error(n);return r.status=t.status,r.method=e.method,r.url=e.action,r}function IO(e){var t=e.responseText||e.response;if(!t)return t;try{return JSON.parse(t)}catch{return t}}function YQ(e){var t=new 
XMLHttpRequest;e.onProgress&&t.upload&&(t.upload.onprogress=function(i){i.total>0&&(i.percent=i.loaded/i.total*100),e.onProgress(i)});var n=new FormData;e.data&&Object.keys(e.data).forEach(function(o){var i=e.data[o];if(Array.isArray(i)){i.forEach(function(a){n.append("".concat(o,"[]"),a)});return}n.append(o,i)}),e.file instanceof Blob?n.append(e.filename,e.file,e.file.name):n.append(e.filename,e.file),t.onerror=function(i){e.onError(i)},t.onload=function(){return t.status<200||t.status>=300?e.onError(GQ(e,t),IO(t)):e.onSuccess(IO(t),t)},t.open(e.method,e.action,!0),e.withCredentials&&"withCredentials"in t&&(t.withCredentials=!0);var r=e.headers||{};return r["X-Requested-With"]!==null&&t.setRequestHeader("X-Requested-With","XMLHttpRequest"),Object.keys(r).forEach(function(o){r[o]!==null&&t.setRequestHeader(o,r[o])}),t.send(n),{abort:function(){t.abort()}}}var QQ=function(){var e=yo($n().mark(function t(n,r){var o,i,a,s,c,u,p,v;return $n().wrap(function(m){for(;;)switch(m.prev=m.next){case 0:u=function(){return u=yo($n().mark(function y(w){return $n().wrap(function(S){for(;;)switch(S.prev=S.next){case 0:return S.abrupt("return",new Promise(function(E){w.file(function(k){r(k)?(w.fullPath&&!k.webkitRelativePath&&(Object.defineProperties(k,{webkitRelativePath:{writable:!0}}),k.webkitRelativePath=w.fullPath.replace(/^\//,""),Object.defineProperties(k,{webkitRelativePath:{writable:!1}})),E(k)):E(null)})}));case 1:case"end":return S.stop()}},y)})),u.apply(this,arguments)},c=function(y){return u.apply(this,arguments)},s=function(){return s=yo($n().mark(function y(w){var C,S,E,k,O;return $n().wrap(function(T){for(;;)switch(T.prev=T.next){case 0:C=w.createReader(),S=[];case 2:return T.next=5,new Promise(function(M){C.readEntries(M,function(){return M([])})});case 5:if(E=T.sent,k=E.length,k){T.next=9;break}return T.abrupt("break",12);case 
9:for(O=0;O{const{componentCls:t,iconCls:n}=e;return{[`${t}-wrapper`]:{[`${t}-drag`]:{position:"relative",width:"100%",height:"100%",textAlign:"center",background:e.colorFillAlter,border:`${de(e.lineWidth)} dashed ${e.colorBorder}`,borderRadius:e.borderRadiusLG,cursor:"pointer",transition:`border-color ${e.motionDurationSlow}`,[t]:{padding:e.padding},[`${t}-btn`]:{display:"table",width:"100%",height:"100%",outline:"none",borderRadius:e.borderRadiusLG,"&:focus-visible":{outline:`${de(e.lineWidthFocus)} solid ${e.colorPrimaryBorder}`}},[`${t}-drag-container`]:{display:"table-cell",verticalAlign:"middle"},[` - &:not(${t}-disabled):hover, - &-hover:not(${t}-disabled) - `]:{borderColor:e.colorPrimaryHover},[`p${t}-drag-icon`]:{marginBottom:e.margin,[n]:{color:e.colorPrimary,fontSize:e.uploadThumbnailSize}},[`p${t}-text`]:{margin:`0 0 ${de(e.marginXXS)}`,color:e.colorTextHeading,fontSize:e.fontSizeLG},[`p${t}-hint`]:{color:e.colorTextDescription,fontSize:e.fontSize},[`&${t}-disabled`]:{[`p${t}-drag-icon ${n}, - p${t}-text, - p${t}-hint - `]:{color:e.colorTextDisabled}}}}}},rZ=e=>{const{componentCls:t,antCls:n,iconCls:r,fontSize:o,lineHeight:i,calc:a}=e,s=`${t}-list-item`,c=`${s}-actions`,u=`${s}-action`,p=e.fontHeightSM;return{[`${t}-wrapper`]:{[`${t}-list`]:Object.assign(Object.assign({},Ps()),{lineHeight:e.lineHeight,[s]:{position:"relative",height:a(e.lineHeight).mul(o).equal(),marginTop:e.marginXS,fontSize:o,display:"flex",alignItems:"center",transition:`background-color ${e.motionDurationSlow}`,"&:hover":{backgroundColor:e.controlItemBgHover},[`${s}-name`]:Object.assign(Object.assign({},Ka),{padding:`0 ${de(e.paddingXS)}`,lineHeight:i,flex:"auto",transition:`all ${e.motionDurationSlow}`}),[c]:{whiteSpace:"nowrap",[u]:{opacity:0},[r]:{color:e.actionsColor,transition:`all ${e.motionDurationSlow}`},[` - ${u}:focus-visible, - &.picture ${u} - `]:{opacity:1},[`${u}${n}-btn`]:{height:p,border:0,lineHeight:1}},[`${t}-icon 
${r}`]:{color:e.colorTextDescription,fontSize:o},[`${s}-progress`]:{position:"absolute",bottom:e.calc(e.uploadProgressOffset).mul(-1).equal(),width:"100%",paddingInlineStart:a(o).add(e.paddingXS).equal(),fontSize:o,lineHeight:0,pointerEvents:"none","> div":{margin:0}}},[`${s}:hover ${u}`]:{opacity:1},[`${s}-error`]:{color:e.colorError,[`${s}-name, ${t}-icon ${r}`]:{color:e.colorError},[c]:{[`${r}, ${r}:hover`]:{color:e.colorError},[u]:{opacity:1}}},[`${t}-list-item-container`]:{transition:`opacity ${e.motionDurationSlow}, height ${e.motionDurationSlow}`,"&::before":{display:"table",width:0,height:0,content:'""'}}})}}},oZ=e=>{const{componentCls:t}=e,n=new fn("uploadAnimateInlineIn",{from:{width:0,height:0,padding:0,opacity:0,margin:e.calc(e.marginXS).div(-2).equal()}}),r=new fn("uploadAnimateInlineOut",{to:{width:0,height:0,padding:0,opacity:0,margin:e.calc(e.marginXS).div(-2).equal()}}),o=`${t}-animate-inline`;return[{[`${t}-wrapper`]:{[`${o}-appear, ${o}-enter, ${o}-leave`]:{animationDuration:e.motionDurationSlow,animationTimingFunction:e.motionEaseInOutCirc,animationFillMode:"forwards"},[`${o}-appear, ${o}-enter`]:{animationName:n},[`${o}-leave`]:{animationName:r}}},{[`${t}-wrapper`]:aT(e)},n,r]},iZ=e=>{const{componentCls:t,iconCls:n,uploadThumbnailSize:r,uploadProgressOffset:o,calc:i}=e,a=`${t}-list`,s=`${a}-item`;return{[`${t}-wrapper`]:{[` - ${a}${a}-picture, - ${a}${a}-picture-card, - ${a}${a}-picture-circle - `]:{[s]:{position:"relative",height:i(r).add(i(e.lineWidth).mul(2)).add(i(e.paddingXS).mul(2)).equal(),padding:e.paddingXS,border:`${de(e.lineWidth)} ${e.lineType} 
${e.colorBorder}`,borderRadius:e.borderRadiusLG,"&:hover":{background:"transparent"},[`${s}-thumbnail`]:Object.assign(Object.assign({},Ka),{width:r,height:r,lineHeight:de(i(r).add(e.paddingSM).equal()),textAlign:"center",flex:"none",[n]:{fontSize:e.fontSizeHeading2,color:e.colorPrimary},img:{display:"block",width:"100%",height:"100%",overflow:"hidden"}}),[`${s}-progress`]:{bottom:o,width:`calc(100% - ${de(i(e.paddingSM).mul(2).equal())})`,marginTop:0,paddingInlineStart:i(r).add(e.paddingXS).equal()}},[`${s}-error`]:{borderColor:e.colorError,[`${s}-thumbnail ${n}`]:{[`svg path[fill='${Kl[0]}']`]:{fill:e.colorErrorBg},[`svg path[fill='${Kl.primary}']`]:{fill:e.colorError}}},[`${s}-uploading`]:{borderStyle:"dashed",[`${s}-name`]:{marginBottom:o}}},[`${a}${a}-picture-circle ${s}`]:{[`&, &::before, ${s}-thumbnail`]:{borderRadius:"50%"}}}}},aZ=e=>{const{componentCls:t,iconCls:n,fontSizeLG:r,colorTextLightSolid:o,calc:i}=e,a=`${t}-list`,s=`${a}-item`,c=e.uploadPicCardSize;return{[` - ${t}-wrapper${t}-picture-card-wrapper, - ${t}-wrapper${t}-picture-circle-wrapper - `]:Object.assign(Object.assign({},Ps()),{display:"block",[`${t}${t}-select`]:{width:c,height:c,textAlign:"center",verticalAlign:"top",backgroundColor:e.colorFillAlter,border:`${de(e.lineWidth)} dashed ${e.colorBorder}`,borderRadius:e.borderRadiusLG,cursor:"pointer",transition:`border-color ${e.motionDurationSlow}`,[`> ${t}`]:{display:"flex",alignItems:"center",justifyContent:"center",height:"100%",textAlign:"center"},[`&:not(${t}-disabled):hover`]:{borderColor:e.colorPrimary}},[`${a}${a}-picture-card, ${a}${a}-picture-circle`]:{display:"flex",flexWrap:"wrap","@supports not (gap: 1px)":{"& > *":{marginBlockEnd:e.marginXS,marginInlineEnd:e.marginXS}},"@supports (gap: 
1px)":{gap:e.marginXS},[`${a}-item-container`]:{display:"inline-block",width:c,height:c,verticalAlign:"top"},"&::after":{display:"none"},"&::before":{display:"none"},[s]:{height:"100%",margin:0,"&::before":{position:"absolute",zIndex:1,width:`calc(100% - ${de(i(e.paddingXS).mul(2).equal())})`,height:`calc(100% - ${de(i(e.paddingXS).mul(2).equal())})`,backgroundColor:e.colorBgMask,opacity:0,transition:`all ${e.motionDurationSlow}`,content:'" "'}},[`${s}:hover`]:{[`&::before, ${s}-actions`]:{opacity:1}},[`${s}-actions`]:{position:"absolute",insetInlineStart:0,zIndex:10,width:"100%",whiteSpace:"nowrap",textAlign:"center",opacity:0,transition:`all ${e.motionDurationSlow}`,[` - ${n}-eye, - ${n}-download, - ${n}-delete - `]:{zIndex:10,width:r,margin:`0 ${de(e.marginXXS)}`,fontSize:r,cursor:"pointer",transition:`all ${e.motionDurationSlow}`,color:o,"&:hover":{color:o},svg:{verticalAlign:"baseline"}}},[`${s}-thumbnail, ${s}-thumbnail img`]:{position:"static",display:"block",width:"100%",height:"100%",objectFit:"contain"},[`${s}-name`]:{display:"none",textAlign:"center"},[`${s}-file + ${s}-name`]:{position:"absolute",bottom:e.margin,display:"block",width:`calc(100% - ${de(i(e.paddingXS).mul(2).equal())})`},[`${s}-uploading`]:{[`&${s}`]:{backgroundColor:e.colorFillAlter},[`&::before, ${n}-eye, ${n}-download, ${n}-delete`]:{display:"none"}},[`${s}-progress`]:{bottom:e.marginXL,width:`calc(100% - 
${de(i(e.paddingXS).mul(2).equal())})`,paddingInlineStart:0}}}),[`${t}-wrapper${t}-picture-circle-wrapper`]:{[`${t}${t}-select`]:{borderRadius:"50%"}}}},sZ=e=>{const{componentCls:t}=e;return{[`${t}-rtl`]:{direction:"rtl"}}},lZ=e=>{const{componentCls:t,colorTextDisabled:n}=e;return{[`${t}-wrapper`]:Object.assign(Object.assign({},jn(e)),{[t]:{outline:0,"input[type='file']":{cursor:"pointer"}},[`${t}-select`]:{display:"inline-block"},[`${t}-disabled`]:{color:n,cursor:"not-allowed"}})}},cZ=e=>({actionsColor:e.colorTextDescription}),uZ=In("Upload",e=>{const{fontSizeHeading3:t,fontHeight:n,lineWidth:r,controlHeightLG:o,calc:i}=e,a=vn(e,{uploadThumbnailSize:i(t).mul(2).equal(),uploadProgressOffset:i(i(n).div(2)).add(r).equal(),uploadPicCardSize:i(o).mul(2.55).equal()});return[lZ(a),nZ(a),iZ(a),aZ(a),rZ(a),oZ(a),sZ(a),zv(a)]},cZ);function ip(e){return Object.assign(Object.assign({},e),{lastModified:e.lastModified,lastModifiedDate:e.lastModifiedDate,name:e.name,size:e.size,type:e.type,uid:e.uid,percent:0,originFileObj:e})}function ap(e,t){const n=Se(t),r=n.findIndex(o=>{let{uid:i}=o;return i===e.uid});return r===-1?n.push(e):n[r]=e,n}function n0(e,t){const n=e.uid!==void 0?"uid":"name";return t.filter(r=>r[n]===e[n])[0]}function dZ(e,t){const n=e.uid!==void 0?"uid":"name",r=t.filter(o=>o[n]!==e[n]);return r.length===t.length?null:r}const fZ=function(){const t=(arguments.length>0&&arguments[0]!==void 0?arguments[0]:"").split("/"),r=t[t.length-1].split(/#|\?/)[0];return(/\.[^./\\]*$/.exec(r)||[""])[0]},jN=e=>e.indexOf("image/")===0,pZ=e=>{if(e.type&&!e.thumbUrl)return jN(e.type);const t=e.thumbUrl||e.url||"",n=fZ(t);return/^data:image\//.test(t)||/(webp|svg|png|gif|jpg|jpeg|jfif|bmp|dpg|ico|heic|heif)$/i.test(n)?!0:!(/^data:/.test(t)||n)},Ia=200;function vZ(e){return new Promise(t=>{if(!e.type||!jN(e.type)){t("");return}const n=document.createElement("canvas");n.width=Ia,n.height=Ia,n.style.cssText=`position: fixed; left: 0; top: 0; width: ${Ia}px; height: ${Ia}px; z-index: 
9999; display: none;`,document.body.appendChild(n);const r=n.getContext("2d"),o=new Image;if(o.onload=()=>{const{width:i,height:a}=o;let s=Ia,c=Ia,u=0,p=0;i>a?(c=a*(Ia/i),p=-(c-s)/2):(s=i*(Ia/a),u=-(s-c)/2),r.drawImage(o,u,p,s,c);const v=n.toDataURL();document.body.removeChild(n),window.URL.revokeObjectURL(o.src),t(v)},o.crossOrigin="anonymous",e.type.startsWith("image/svg+xml")){const i=new FileReader;i.onload=()=>{i.result&&typeof i.result=="string"&&(o.src=i.result)},i.readAsDataURL(e)}else if(e.type.startsWith("image/gif")){const i=new FileReader;i.onload=()=>{i.result&&t(i.result)},i.readAsDataURL(e)}else o.src=window.URL.createObjectURL(e)})}const hZ=d.forwardRef((e,t)=>{let{prefixCls:n,className:r,style:o,locale:i,listType:a,file:s,items:c,progress:u,iconRender:p,actionIconRender:v,itemRender:h,isImgUrl:m,showPreviewIcon:b,showRemoveIcon:y,showDownloadIcon:w,previewIcon:C,removeIcon:S,downloadIcon:E,extra:k,onPreview:O,onDownload:$,onClose:T}=e;var M,P;const{status:R}=s,[A,V]=d.useState(R);d.useEffect(()=>{R!=="removed"&&V(R)},[R]);const[z,B]=d.useState(!1);d.useEffect(()=>{const ee=setTimeout(()=>{B(!0)},300);return()=>{clearTimeout(ee)}},[]);const _=p(s);let H=d.createElement("div",{className:`${n}-icon`},_);if(a==="picture"||a==="picture-card"||a==="picture-circle")if(A==="uploading"||!s.thumbUrl&&!s.url){const ee=ie(`${n}-list-item-thumbnail`,{[`${n}-list-item-file`]:A!=="uploading"});H=d.createElement("div",{className:ee},_)}else{const ee=m!=null&&m(s)?d.createElement("img",{src:s.thumbUrl||s.url,alt:s.name,className:`${n}-list-item-image`,crossOrigin:s.crossOrigin}):_,re=ie(`${n}-list-item-thumbnail`,{[`${n}-list-item-file`]:m&&!m(s)});H=d.createElement("a",{className:re,onClick:le=>O(s,le),href:s.url||s.thumbUrl,target:"_blank",rel:"noopener noreferrer"},ee)}const j=ie(`${n}-list-item`,`${n}-list-item-${A}`),L=typeof s.linkProps=="string"?JSON.parse(s.linkProps):s.linkProps,F=(typeof y=="function"?y(s):y)?v((typeof 
S=="function"?S(s):S)||d.createElement(Eq,null),()=>T(s),n,i.removeFile,!0):null,U=(typeof w=="function"?w(s):w)&&A==="done"?v((typeof E=="function"?E(s):E)||d.createElement($q,null),()=>$(s),n,i.downloadFile):null,D=a!=="picture-card"&&a!=="picture-circle"&&d.createElement("span",{key:"download-delete",className:ie(`${n}-list-item-actions`,{picture:a==="picture"})},U,F),W=typeof k=="function"?k(s):k,G=W&&d.createElement("span",{className:`${n}-list-item-extra`},W),q=ie(`${n}-list-item-name`),J=s.url?d.createElement("a",Object.assign({key:"view",target:"_blank",rel:"noopener noreferrer",className:q,title:s.name},L,{href:s.url,onClick:ee=>O(s,ee)}),s.name,G):d.createElement("span",{key:"view",className:q,onClick:ee=>O(s,ee),title:s.name},s.name,G),Y=(typeof b=="function"?b(s):b)&&(s.url||s.thumbUrl)?d.createElement("a",{href:s.url||s.thumbUrl,target:"_blank",rel:"noopener noreferrer",onClick:ee=>O(s,ee),title:i.previewFile},typeof C=="function"?C(s):C||d.createElement(IM,null)):null,Q=(a==="picture-card"||a==="picture-circle")&&A!=="uploading"&&d.createElement("span",{className:`${n}-list-item-actions`},Y,A==="done"&&U,F),{getPrefixCls:te}=d.useContext(ht),ce=te(),se=d.createElement("div",{className:j},H,J,D,Q,z&&d.createElement(Xo,{motionName:`${ce}-fade`,visible:A==="uploading",motionDeadline:2e3},ee=>{let{className:re}=ee;const le="percent"in s?d.createElement(oq,Object.assign({},u,{type:"line",percent:s.percent,"aria-label":s["aria-label"],"aria-labelledby":s["aria-labelledby"]})):null;return d.createElement("div",{className:ie(`${n}-list-item-progress`,re)},le)})),ne=s.response&&typeof s.response=="string"?s.response:((M=s.error)===null||M===void 0?void 0:M.statusText)||((P=s.error)===null||P===void 0?void 0:P.message)||i.uploadError,ae=A==="error"?d.createElement(gi,{title:ne,getPopupContainer:ee=>ee.parentNode},se):se;return 
d.createElement("div",{className:ie(`${n}-list-item-container`,r),style:o,ref:t},h?h(ae,s,c,{download:$.bind(null,s),preview:O.bind(null,s),remove:T.bind(null,s)}):ae)}),gZ=(e,t)=>{const{listType:n="text",previewFile:r=vZ,onPreview:o,onDownload:i,onRemove:a,locale:s,iconRender:c,isImageUrl:u=pZ,prefixCls:p,items:v=[],showPreviewIcon:h=!0,showRemoveIcon:m=!0,showDownloadIcon:b=!1,removeIcon:y,previewIcon:w,downloadIcon:C,extra:S,progress:E={size:[-1,2],showInfo:!1},appendAction:k,appendActionVisible:O=!0,itemRender:$,disabled:T}=e,M=kw(),[P,R]=d.useState(!1),A=["picture-card","picture-circle"].includes(n);d.useEffect(()=>{n.startsWith("picture")&&(v||[]).forEach(G=>{!(G.originFileObj instanceof File||G.originFileObj instanceof Blob)||G.thumbUrl!==void 0||(G.thumbUrl="",r==null||r(G.originFileObj).then(q=>{G.thumbUrl=q||"",M()}))})},[n,v,r]),d.useEffect(()=>{R(!0)},[]);const V=(G,q)=>{if(o)return q.preventDefault(),o(G)},z=G=>{typeof i=="function"?i(G):G.url&&window.open(G.url)},B=G=>{a==null||a(G)},_=G=>{if(c)return c(G,n);const q=G.status==="uploading";if(n.startsWith("picture")){const J=n==="picture"?d.createElement(Xa,null):s.uploading,Y=u!=null&&u(G)?d.createElement(dX,null):d.createElement(Fq,null);return q?J:Y}return q?d.createElement(Xa,null):d.createElement(lX,null)},H=(G,q,J,Y,Q)=>{const te={type:"text",size:"small",title:Y,onClick:ce=>{var se,ne;q(),d.isValidElement(G)&&((ne=(se=G.props).onClick)===null||ne===void 0||ne.call(se,ce))},className:`${J}-list-item-action`};return 
Q&&(te.disabled=T),d.isValidElement(G)?d.createElement(jr,Object.assign({},te,{icon:Dr(G,Object.assign(Object.assign({},G.props),{onClick:()=>{}}))})):d.createElement(jr,Object.assign({},te),d.createElement("span",null,G))};d.useImperativeHandle(t,()=>({handlePreview:V,handleDownload:z}));const{getPrefixCls:j}=d.useContext(ht),L=j("upload",p),F=j(),U=ie(`${L}-list`,`${L}-list-${n}`),D=d.useMemo(()=>Ln(Ju(F),["onAppearEnd","onEnterEnd","onLeaveEnd"]),[F]),W=Object.assign(Object.assign({},A?{}:D),{motionDeadline:2e3,motionName:`${L}-${A?"animate-inline":"animate"}`,keys:Se(v.map(G=>({key:G.uid,file:G}))),motionAppear:P});return d.createElement("div",{className:U},d.createElement(VI,Object.assign({},W,{component:!1}),G=>{let{key:q,file:J,className:Y,style:Q}=G;return d.createElement(hZ,{key:q,locale:s,prefixCls:L,className:Y,style:Q,file:J,items:v,progress:E,listType:n,isImgUrl:u,showPreviewIcon:h,showRemoveIcon:m,showDownloadIcon:b,removeIcon:y,previewIcon:w,downloadIcon:C,extra:S,iconRender:_,actionIconRender:H,itemRender:$,onPreview:V,onDownload:z,onClose:B})}),k&&d.createElement(Xo,Object.assign({},W,{visible:O,forceRender:!0}),G=>{let{className:q,style:J}=G;return Dr(k,Y=>({className:ie(Y.className,q),style:Object.assign(Object.assign(Object.assign({},J),{pointerEvents:q?"none":void 0}),Y.style)}))}))},mZ=d.forwardRef(gZ);var bZ=function(e,t,n,r){function o(i){return i instanceof n?i:new n(function(a){a(i)})}return new(n||(n=Promise))(function(i,a){function s(p){try{u(r.next(p))}catch(v){a(v)}}function c(p){try{u(r.throw(p))}catch(v){a(v)}}function u(p){p.done?i(p.value):o(p.value).then(s,c)}u((r=r.apply(e,t||[])).next())})};const 
hu=`__LIST_IGNORE_${Date.now()}__`,yZ=(e,t)=>{const{fileList:n,defaultFileList:r,onRemove:o,showUploadList:i=!0,listType:a="text",onPreview:s,onDownload:c,onChange:u,onDrop:p,previewFile:v,disabled:h,locale:m,iconRender:b,isImageUrl:y,progress:w,prefixCls:C,className:S,type:E="select",children:k,style:O,itemRender:$,maxCount:T,data:M={},multiple:P=!1,hasControlInside:R=!0,action:A="",accept:V="",supportServerRender:z=!0,rootClassName:B}=e,_=d.useContext(So),H=h??_,[j,L]=Dn(r||[],{value:n,postState:we=>we??[]}),[F,U]=d.useState("drop"),D=d.useRef(null),W=d.useRef(null);d.useMemo(()=>{const we=Date.now();(n||[]).forEach((ze,Me)=>{!ze.uid&&!Object.isFrozen(ze)&&(ze.uid=`__AUTO__${we}_${Me}__`)})},[n]);const G=(we,ze,Me)=>{let Pe=Se(ze),Ke=!1;T===1?Pe=Pe.slice(-1):T&&(Ke=Pe.length>T,Pe=Pe.slice(0,T)),pi.flushSync(()=>{L(Pe)});const St={file:we,fileList:Pe};Me&&(St.event=Me),(!Ke||we.status==="removed"||Pe.some(Ft=>Ft.uid===we.uid))&&pi.flushSync(()=>{u==null||u(St)})},q=(we,ze)=>bZ(void 0,void 0,void 0,function*(){const{beforeUpload:Me,transformFile:Pe}=e;let Ke=we;if(Me){const St=yield Me(we,ze);if(St===!1)return!1;if(delete we[hu],St===hu)return Object.defineProperty(we,hu,{value:!0,configurable:!0}),!1;typeof St=="object"&&St&&(Ke=St)}return Pe&&(Ke=yield Pe(Ke)),Ke}),J=we=>{const ze=we.filter(Ke=>!Ke.file[hu]);if(!ze.length)return;const Me=ze.map(Ke=>ip(Ke.file));let Pe=Se(j);Me.forEach(Ke=>{Pe=ap(Ke,Pe)}),Me.forEach((Ke,St)=>{let Ft=Ke;if(ze[St].parsedFile)Ke.status="uploading";else{const{originFileObj:Lt}=Ke;let Ct;try{Ct=new File([Lt],Lt.name,{type:Lt.type})}catch{Ct=new Blob([Lt],{type:Lt.type}),Ct.name=Lt.name,Ct.lastModifiedDate=new Date,Ct.lastModified=new Date().getTime()}Ct.uid=Ke.uid,Ft=Ct}G(Ft,Pe)})},Y=(we,ze,Me)=>{try{typeof we=="string"&&(we=JSON.parse(we))}catch{}if(!n0(ze,j))return;const Pe=ip(ze);Pe.status="done",Pe.percent=100,Pe.response=we,Pe.xhr=Me;const Ke=ap(Pe,j);G(Pe,Ke)},Q=(we,ze)=>{if(!n0(ze,j))return;const 
Me=ip(ze);Me.status="uploading",Me.percent=we.percent;const Pe=ap(Me,j);G(Me,Pe,we)},te=(we,ze,Me)=>{if(!n0(Me,j))return;const Pe=ip(Me);Pe.error=we,Pe.response=ze,Pe.status="error";const Ke=ap(Pe,j);G(Pe,Ke)},ce=we=>{let ze;Promise.resolve(typeof o=="function"?o(we):o).then(Me=>{var Pe;if(Me===!1)return;const Ke=dZ(we,j);Ke&&(ze=Object.assign(Object.assign({},we),{status:"removed"}),j==null||j.forEach(St=>{const Ft=ze.uid!==void 0?"uid":"name";St[Ft]===ze[Ft]&&!Object.isFrozen(St)&&(St.status="removed")}),(Pe=D.current)===null||Pe===void 0||Pe.abort(ze),G(ze,Ke))})},se=we=>{U(we.type),we.type==="drop"&&(p==null||p(we))};d.useImperativeHandle(t,()=>({onBatchStart:J,onSuccess:Y,onProgress:Q,onError:te,fileList:j,upload:D.current,nativeElement:W.current}));const{getPrefixCls:ne,direction:ae,upload:ee}=d.useContext(ht),re=ne("upload",C),le=Object.assign(Object.assign({onBatchStart:J,onError:te,onProgress:Q,onSuccess:Y},e),{data:M,multiple:P,action:A,accept:V,supportServerRender:z,prefixCls:re,disabled:H,beforeUpload:q,onChange:void 0,hasControlInside:R});delete le.className,delete le.style,(!k||H)&&delete le.id;const pe=`${re}-wrapper`,[Oe,ge,Re]=uZ(re,pe),[ye]=bi("Upload",hi.Upload),{showRemoveIcon:Te,showPreviewIcon:Ae,showDownloadIcon:me,removeIcon:Ie,previewIcon:Le,downloadIcon:Be,extra:et}=typeof i=="boolean"?{}:i,rt=typeof Te>"u"?!H:Te,Ze=(we,ze)=>i?d.createElement(mZ,{prefixCls:re,listType:a,items:j,previewFile:v,onPreview:s,onDownload:c,onRemove:ce,showRemoveIcon:rt,showPreviewIcon:Ae,showDownloadIcon:me,removeIcon:Ie,previewIcon:Le,downloadIcon:Be,iconRender:b,extra:et,locale:Object.assign(Object.assign({},ye),m),isImageUrl:y,progress:w,appendAction:we,appendActionVisible:ze,itemRender:$,disabled:H}):we,Ve=ie(pe,S,B,ge,Re,ee==null?void 0:ee.className,{[`${re}-rtl`]:ae==="rtl",[`${re}-picture-card-wrapper`]:a==="picture-card",[`${re}-picture-circle-wrapper`]:a==="picture-circle"}),Ye=Object.assign(Object.assign({},ee==null?void 
0:ee.style),O);if(E==="drag"){const we=ie(ge,re,`${re}-drag`,{[`${re}-drag-uploading`]:j.some(ze=>ze.status==="uploading"),[`${re}-drag-hover`]:F==="dragover",[`${re}-disabled`]:H,[`${re}-rtl`]:ae==="rtl"});return Oe(d.createElement("span",{className:Ve,ref:W},d.createElement("div",{className:we,style:Ye,onDrop:se,onDragOver:se,onDragLeave:se},d.createElement(qb,Object.assign({},le,{ref:D,className:`${re}-btn`}),d.createElement("div",{className:`${re}-drag-container`},k))),Ze()))}const Ge=ie(re,`${re}-select`,{[`${re}-disabled`]:H}),Fe=d.createElement("div",{className:Ge,style:k?void 0:{display:"none"}},d.createElement(qb,Object.assign({},le,{ref:D})));return Oe(a==="picture-card"||a==="picture-circle"?d.createElement("span",{className:Ve,ref:W},Ze(Fe,!!k)):d.createElement("span",{className:Ve,ref:W},Fe,Ze()))},LN=d.forwardRef(yZ);var wZ=function(e,t){var n={};for(var r in e)Object.prototype.hasOwnProperty.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&typeof Object.getOwnPropertySymbols=="function")for(var o=0,r=Object.getOwnPropertySymbols(e);o{var{style:n,height:r,hasControlInside:o=!1}=e,i=wZ(e,["style","height","hasControlInside"]);return d.createElement(LN,Object.assign({ref:t,hasControlInside:o},i,{type:"drag",style:Object.assign(Object.assign({},n),{height:r})}))}),a1=LN;a1.Dragger=xZ;a1.LIST_IGNORE=hu;var r0=d,SZ=function(e){return typeof e=="function"},CZ=function(e){var t=r0.useState(e),n=t[0],r=t[1],o=r0.useRef(n),i=r0.useCallback(function(a){o.current=SZ(a)?a(o.current):a,r(o.current)},[]);return[n,i,o]},EZ=CZ;const ou=js(EZ);function Dd(e){let t=0;for(let n=0;n>r*8&255;n+=("00"+o.toString(16)).substr(-2)}return n}function OZ(e){const t=[],n=new Set;for(let r=0;r<(e==null?void 0:e.length);r++)e[r].tags.split(" ").forEach(o=>{n.add(o)});for(const r of n)t.push({text:r,value:r});return{tagsOptions:t}}function $Z(e){return/https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{1,256}\.[a-zA-Z0-9()]{1,6}\b([-a-zA-Z0-9()!@:%_\+.~#?&\/\/=]*)/.test(e)}async function 
BN(e){var t;try{const n=await ke.request({url:e}),r=new DOMParser().parseFromString(n,"text/html"),o=(t=r.querySelector("title"))==null?void 0:t.innerText,i=r.querySelector("meta[name=name]"),a=i?i.getAttribute("content"):null,s=r.querySelector("meta[name=description]"),c=s?s.getAttribute("content"):null;return{title:o||"",name:a,description:c}}catch{return{title:"",name:"",description:""}}}async function IZ(e){return e.startsWith("http")||e.startsWith("https")||(e="https://"+e),BN(e)}function TO(e){const t=TZ(e);return t.unshift({value:"ROOT",text:"ROOT",label:"ROOT",children:[]}),t}function TZ(e){const t=[],n=e.split(` -`),r=/^(\s*)-\s(.*)/;return n.forEach(function(o,i){var s;const a=o.match(r);if(a){let c;const u=a[1];new RegExp(/^\t+/g).test(u)?c=u.length:c=u.length/4;const p=a[2],v=MZ(p);if(c===0)t.push(v);else{const h=PZ(c,t);(s=h==null?void 0:h.children)==null||s.push(v)}}}),t}function PZ(e,t){let n=0,r=t[t.length-1];for(;n!u||!c[0]?null:c.map((p,v)=>{const h=u[v];return h!=null&&h.value?v===c.length-1?it.jsx("span",{children:p},h==null?void 0:h.value):it.jsxs("span",{children:[p," / "]},h==null?void 0:h.value):it.jsx("span",{children:p},"ROOT")});e.bookmark&&e.bookmark.id&&(e.bookmark.modified=Date.now());const o=async(c,u)=>{const p=u.find(h=>h.name[0]==="url"),v=u.find(h=>h.name[0]==="name");if(!(v!=null&&v.value)&&p&&$Z(p.value))try{const{title:h,description:m}=await IZ(p.value);h&&m&&(t.setFieldValue("name",h),t.setFieldValue("description",m))}catch{}},i=c=>{var p,v;const u={id:String(Dd(c.url)),name:c.name,url:c.url,description:c.description,category:c.category,tags:c.tags.join(" "),created:ke.moment(c.created,"YYYY-MM-DD HH:mm").valueOf(),modified:ke.moment(c.modified,"YYYY-MM-DD HH:mm").valueOf()};(p=e.bookmark)!=null&&p.id?e.handleSaveBookmark(u,(v=e.bookmark)==null?void 0:v.id):e.handleSaveBookmark(u,""),t.resetFields()},a=()=>{t.resetFields()};return 
it.jsxs(Or,{form:t,onFinish:i,onFieldsChange:o,name:"bookmark",children:[it.jsx(Or.Item,{label:"Name",name:"name",rules:[{required:!0,message:"Please input BookMarkBar name!"}],initialValue:e.bookmark.name,shouldUpdate:!0,children:it.jsx(Fo,{})}),it.jsx(Or.Item,{label:"URL",name:"url",initialValue:e.bookmark.url,rules:[{type:"url",required:!0,message:"Please input BookMarkBar url!"}],shouldUpdate:!0,children:it.jsx(Fo,{})}),it.jsx(Or.Item,{label:"Description",name:"description",initialValue:e.bookmark.description,rules:[{required:!1,message:"Please input the description!"}],shouldUpdate:!0,children:it.jsx(Fo.TextArea,{})}),it.jsx(Or.Item,{label:"Tags",name:"tags",initialValue:e.bookmark.tags?(s=e.bookmark.tags)==null?void 0:s.split(" "):[],rules:[{required:!1,message:"Please input the tags!"}],shouldUpdate:!0,children:it.jsx(yi,{mode:"tags",placeholder:"Please select tags",allowClear:!0,children:n.map((c,u)=>it.jsx(NZ,{value:c.value,children:c.value},`${c.value}-${u}`))})}),it.jsx(Or.Item,{label:"Category",name:"category",initialValue:e.bookmark.category?e.bookmark.category:"",rules:[{required:!1,message:"Please select the category!"}],shouldUpdate:!0,children:it.jsx(hc,{displayRender:r,options:e.categories,changeOnSelect:!0})}),it.jsx(Or.Item,{label:"Created Time",name:"created",initialValue:ke.moment(e.bookmark.created).format("YYYY-MM-DD HH:mm"),rules:[{required:!0,message:"Please select the created time!"}],shouldUpdate:!0,children:it.jsx(Fo,{})}),it.jsx(Or.Item,{label:"Modified Time",name:"modified",initialValue:ke.moment(e.bookmark.modified).format("YYYY-MM-DD HH:mm"),rules:[{required:!0,message:"Please select the modified time!"}],shouldUpdate:!0,children:it.jsx(Fo,{})}),it.jsx(Or.Item,{children:it.jsxs("div",{className:"submit-bar",style:{textAlign:"end"},children:[it.jsx(jr,{className:"wb-reset-button",htmlType:"button",onClick:a,children:"Reset"}),it.jsx(jr,{type:"primary",htmlType:"submit",children:"Submit"})]})})]})}const DZ=e=>{const 
t=async(r,o)=>{if(/^(https?:\/\/)?(www\.)?[-a-zA-Z0-9@:%._\+~#?&//=]{1,256}\.[a-zA-Z0-9()]{1,6}\b([-a-zA-Z0-9()@:%_\+.~#?&//=]*)$/g.test(r.url))try{o.some(s=>s.url===r.url)||o.unshift(r)}catch{}},n={action:"",listType:"text",beforeUpload(r){return new Promise(o=>{const i=new FileReader;i.readAsText(r,"utf-8"),i.onload=async()=>{const a=i.result;if(!a)return;const s=/
(.*)<\/A>/gm;let c;const{bookmarks:u,categories:p}=await Ua(this.plugin);for(;(c=s.exec(a))!==null;){c.index===s.lastIndex&&s.lastIndex++;const v=this.plugin.settings.bookmarkManager.defaultCategory.split(",").map(m=>m.trim()),h={id:String(Dd(c[1])),name:c[3],url:c[1],description:"",category:v.length>0?v:[""],tags:"",created:ke.moment(c[2],"X").valueOf()??ke.moment().valueOf(),modified:ke.moment(c[2],"X").valueOf()??ke.moment().valueOf()};try{await t(h,u)}catch{new ke.Notice(`import ${h.name} faield`)}}await Il(this.plugin,{bookmarks:u,categories:p}),await e.handleImportFinished(u)},i.onloadend=async()=>{new ke.Notice("Import successfully!!!")}})}};return it.jsx(a1,{...n,children:it.jsx(jr,{children:"Import"})})},jZ=["name","description","url","category","tags","created","modified"],iu={id:"",name:"",description:"",url:"",tags:"",category:[""],created:ke.moment().valueOf(),modified:ke.moment().valueOf()};function LZ(e){const[t,n,r]=ou(e.bookmarks),[o,i]=d.useState(e.categories),a=OZ(t),[s,c]=d.useState(iu),[u,p]=d.useState(""),[v,h]=d.useState(1),[m,b,y]=ou({tags:null}),[w,C,S]=ou({category:null}),[E,k,O]=ou({order:"descend"}),$=[{title:je("Name"),dataIndex:"name",key:"name",render:(Y,Q)=>it.jsx("a",{href:Q.url,onClick:te=>{if(te.preventDefault(),te.ctrlKey||te.metaKey){window.open(Q.url,"_blank","external");return}on.spawnWebBrowserView(e.plugin,!0,{url:Q.url})},children:Y}),showSorterTooltip:!1,sorter:(Y,Q)=>Y.name.localeCompare(Q.name),sortOrder:E.columnKey==="name"?E.order:null},{title:je("Description"),dataIndex:"description",key:"description",onFilter:(Y,Q)=>Q.description.indexOf(Y)===0},{title:je("Url"),dataIndex:"url",key:"url"},{title:je("Category"),dataIndex:"category",key:"category",render:Y=>Y[0]===""?it.jsx("p",{}):it.jsx("p",{children:Y.join(">")}),filters:TO(e.plugin.settings.bookmarkManager.category),filterMode:e.plugin.settings.bookmarkManager.defaultFilterType,filterSearch:!0,onFilter:(Y,Q)=>Q.category.includes(Y)||Y==="ROOT"&&!e.plugin.settings.bo
okmarkManager.category.contains(Q.category[0]?Q.category[0]:"")},{title:je("Tags"),dataIndex:"tags",key:"tags",render:Y=>Y?Y.split(" ").map(Q=>{const te=kZ(Q);return it.jsx(NN,{color:te,onClick:()=>{let ce=null;y.current.tags?(ce=y.current.tags.slice(),ce.contains(Q)||(ce=[...ce,Q])):ce=[Q],b({...y.current,tags:ce})},children:Q.toUpperCase()},Q)}):"",filters:a.tagsOptions,onFilter:(Y,Q)=>Y===""?Q.tags==="":Q.tags.indexOf(Y)===0},{title:je("Created"),dataIndex:"created",key:"created",render:Y=>it.jsx("p",{children:ke.moment(Y).format("YYYY-MM-DD")}),sorter:(Y,Q)=>Y.created-Q.created,sortOrder:E.columnKey==="created"?E.order:null},{title:je("Modified"),dataIndex:"modified",key:"modified",render:Y=>it.jsx("p",{children:ke.moment(Y).format("YYYY-MM-DD")}),sorter:(Y,Q)=>Y.modified-Q.modified,sortOrder:E.columnKey==="modified"?E.order:null},{title:je("Action"),dataIndex:"action",key:"action",render:(Y,Q)=>it.jsxs(Ww,{size:"middle",children:[it.jsx("a",{onClick:()=>{c(Q),H(!0)},children:"Edit"}),it.jsx(ZM,{title:"Are you sure to delete this bookmark?",onConfirm:()=>{U(Q)},onCancel:()=>{},okText:"Yes",cancelText:"No",children:it.jsx("a",{href:"#",children:"Delete"})})]})}],[T,M]=d.useState(e.plugin.settings.bookmarkManager.defaultColumnList),[P,R,A]=ou($.filter(Y=>T.includes(Y.key)||Y.key==="action")),V=(Y,Q,te)=>{k(te),Q.tags!==void 0?b(Q):Q.category!==void 0&&C(Q)};d.useEffect(()=>()=>{R(A.current.map(Y=>Y.key===O.current.columnKey?{...Y,sortOrder:O.current.order}:Y.key=="tags"?{...Y,filteredValue:y.current.tags}:Y.key=="category"?{...Y,filteredValue:y.current.category}:Y))},[m,w,E]);const z=Ns.Group,B=async Y=>{const Q=$.filter(te=>Y.includes(te.key)||te.key==="action");R(Q),M(Y),e.plugin.settings.bookmarkManager.defaultColumnList=Y,await e.plugin.saveSettings()},[_,H]=d.useState(!1);d.useEffect(()=>()=>{const Y=TO(e.plugin.settings.bookmarkManager.category);i(Y),Y&&Il(e.plugin,{bookmarks:t,categories:Y})},[e.categories]);const j=Y=>{Y===void 0&&(Y=u);const 
Q=ke.prepareFuzzySearch(Y);if(Y==="")n(e.bookmarks);else{const te=e.bookmarks.filter(ce=>{var se,ne;return((se=Q(ce.name.toLocaleLowerCase()))==null?void 0:se.score)||((ne=Q(ce.description.toLocaleLowerCase()))==null?void 0:ne.score)});n(te)}p(Y)},L=Y=>{Y.key==="Escape"&&(n(e.bookmarks),p(""))},F=()=>{c(iu),H(!0)},U=async Y=>{const Q=[...r.current];n(Q.filter(te=>te.id!==Y.id)),await Il(e.plugin,{bookmarks:r.current,categories:e.categories}),fv(e.plugin,r.current,e.categories,!1)},D=async Y=>{n([...Y])},W=()=>{c(iu),H(!1)},G=()=>{c(iu),H(!1)},q=async(Y,Q)=>{e.bookmarks.some((ce,se)=>ce.url===Y.url||ce.id===Q?(t[se]=Y,n(t),H(!1),c(iu),!0):!1)||(t.unshift(Y),n(t),H(!1)),await Il(e.plugin,{bookmarks:t,categories:e.categories}),fv(e.plugin,t,e.categories,!1)},J={handleImportFinished:Y=>D(Y)};return it.jsx("div",{className:"surfing-bookmark-manager",children:it.jsxs(la,{theme:{algorithm:e.plugin.app.getTheme()==="obsidian"?CO.darkAlgorithm:CO.defaultAlgorithm},children:[it.jsx("div",{className:"surfing-bookmark-manager-header-bar",children:it.jsxs(CM,{gutter:[16,16],children:[it.jsx(iv,{span:12,children:it.jsxs("div",{className:"surfing-bookmark-manager-search-bar",children:[it.jsx(Fo,{value:u,onChange:Y=>{j(Y.target.value)},defaultValue:u,placeholder:` ${je("Search from ")} ${t.length} ${je(" bookmarks")} `,onPressEnter:Y=>{j(Y.currentTarget.value)},onKeyDown:L,allowClear:!0}),it.jsx(jr,{onClick:F,children:"+"}),it.jsx(DZ,{...J})]})}),it.jsx(iv,{span:7,style:{marginTop:"5px"},children:it.jsx(z,{options:jZ,value:T,onChange:B})})]})}),it.jsx(ca,{dataSource:r.current,columns:P,pagination:{defaultCurrent:1,current:v,defaultPageSize:Number(e.plugin.settings.bookmarkManager.pagination),position:["bottomCenter"],onChange:(Y,Q)=>{h(Y)}},scroll:{y:"100%",x:"fit-content"},sticky:!0,rowKey:"id",showSorterTooltip:!1,onChange:V},new 
Date().toISOString()),it.jsx(wi,{title:"Bookmark",keyboard:!0,open:_,onOk:W,onCancel:G,footer:[null],children:it.jsx(RZ,{bookmark:s,options:a,handleSaveBookmark:q,categories:o})},s.id)]})})}const Mi="surfing-bookmark-manager";class BZ extends ke.ItemView{constructor(n,r){super(n);Ce(this,"bookmarkData",[]);Ce(this,"categoryData",[]);Ce(this,"plugin");this.plugin=r}getViewType(){return Mi}getDisplayText(){return"Surfing Bookmark Manager"}getIcon(){return"album"}async onOpen(){try{const{bookmarks:n,categories:r}=await Ua(this.plugin);this.bookmarkData=n,this.categoryData=r}catch{if(this.bookmarkData.length===0){await k2(this.plugin);const{bookmarks:r,categories:o}=await Ua(this.plugin);this.bookmarkData=r,this.categoryData=o}}this.bookmarkData&&this.categoryData&&kv.createRoot(this.containerEl).render(it.jsx(ue.StrictMode,{children:it.jsx(LZ,{bookmarks:this.bookmarkData,categories:this.categoryData,plugin:this.plugin})}))}}class AN{constructor(t,n){Ce(this,"view");Ce(this,"plugin");Ce(this,"BookmarkBarEl");Ce(this,"BookmarkBarContainerEl");Ce(this,"bookmarkData",[]);Ce(this,"categoryData",[]);this.view=t,this.plugin=n}async onload(){var t;this.BookmarkBarEl=this.view.contentEl.createEl("div",{cls:"wb-bookmark-bar"}),this.renderIcon();try{const{bookmarks:n,categories:r}=await Ua(this.plugin);this.bookmarkData=n,this.categoryData=r}catch{if(((t=this.bookmarkData)==null?void 0:t.length)===0||!this.bookmarkData){await k2(this.plugin);const{bookmarks:r,categories:o}=await Ua(this.plugin);this.bookmarkData=r,this.categoryData=o}}this.render(this.bookmarkData,this.categoryData)}renderIcon(){const t=this.BookmarkBarEl.createEl("div",{cls:"wb-bookmark-manager-entry"}),n=t.createEl("div",{cls:"wb-bookmark-manager-icon"});t.onclick=async()=>{const r=this.plugin.app.workspace;r.detachLeavesOfType(Mi),await 
r.getLeaf(!1).setViewState({type:Mi}),r.revealLeaf(r.getLeavesOfType(Mi)[0])},ke.setIcon(n,"bookmark")}render(t,n){this.BookmarkBarContainerEl&&this.BookmarkBarContainerEl.detach();const r=n.shift();r&&n.push(r),this.BookmarkBarContainerEl=this.BookmarkBarEl.createEl("div",{cls:"wb-bookmark-bar-container"}),n==null||n.forEach(o=>{new hL(this.BookmarkBarContainerEl,this.plugin,this.view,o,t).onload()})}}const fv=(e,t,n,r)=>{if(r){const a=e.app.workspace.getLeavesOfType("surfing-bookmark-manager");a.length>0&&a[0].rebuildView()}const o=e.app.workspace.getLeavesOfType("empty");o.length>0&&o.forEach(a=>{a.rebuildView()});const i=e.app.workspace.getLeavesOfType("surfing-view");i.length>0&&i.forEach(a=>{var s,c;(c=(s=a.view)==null?void 0:s.bookmarkBar)==null||c.render(t,n)})};class Xb{constructor(t,n,r){Ce(this,"contentEl");Ce(this,"webviewEl");Ce(this,"node");Ce(this,"currentUrl");Ce(this,"plugin");this.contentEl=t.createEl("div",{cls:"wb-view-content"}),this.node=t,this.currentUrl=n,this.plugin=r}onload(){this.contentEl.empty(),this.appendWebView()}appendWebView(){const t=this.contentEl.doc;this.webviewEl=t.createElement("webview"),this.webviewEl.setAttribute("allowpopups",""),this.webviewEl.addClass("wb-frame");const n=this;this.currentUrl&&this.webviewEl.setAttribute("src",this.currentUrl),this.webviewEl.addEventListener("dom-ready",r=>{const o=Un.remote.webContents.fromId(this.webviewEl.getWebContentsId());o.setWindowOpenHandler(i=>{if(i.disposition!=="foreground-tab")return on.spawnWebBrowserView(n.plugin,!0,{url:i.url}),{action:"allow"}});try{const a=this.plugin.settings.highlightFormat,s=()=>{var p;let c="";const u=(p=a.match(/\{TIME\:[^\{\}\[\]]*\}/g))==null?void 0:p[0];if(u){const v=ke.moment().format(u.replace(/{TIME:([^\}]*)}/g,"$1"));return c=a.replace(u,v),c}return c};o.executeJavaScript(` - window.addEventListener('dragstart', (e) => { - if(e.ctrlKey || e.metaKey) { - e.dataTransfer.clearData(); - const selectionText = document.getSelection().toString(); - 
const linkToHighlight = e.srcElement.baseURI.replace(/#:~:text=(.*)/g, "") + "#:~:text=" + encodeURIComponent(selectionText); - let link = ""; - if ("${a}".includes("{TIME")) { - link = "${s()}"; - } - link = (link != "" ? link : "${a}").replace(/{URL}/g, linkToHighlight).replace(/{CONTENT}/g, selectionText.replace(/\\n/g, " ")); - - e.dataTransfer.setData('text/plain', link); - console.log(e); - } - }); - `,!0).then(c=>{})}catch{}}),this.webviewEl.addEventListener("did-navigate-in-page",r=>{if(r.url.contains("contacts.google.com/widget")){Un.remote.webContents.fromId(this.webviewEl.getWebContentsId()).stop();return}this.currentUrl=r.url}),this.webviewEl.addEventListener("destroyed",()=>{t!==this.contentEl.doc&&(this.webviewEl.detach(),this.appendWebView())}),t.contains(this.contentEl)?this.contentEl.appendChild(this.webviewEl):this.contentEl.onNodeInserted(()=>{this.contentEl.doc===t?this.contentEl.appendChild(this.webviewEl):this.appendWebView()})}}const Nr="surfing-view";class on extends ke.ItemView{constructor(n,r){super(n);Ce(this,"plugin");Ce(this,"searchBox");Ce(this,"currentUrl");Ce(this,"currentTitle","Surfing");Ce(this,"headerBar");Ce(this,"favicon");Ce(this,"webviewEl");Ce(this,"menu");Ce(this,"hoverPopover");Ce(this,"searchContainer");Ce(this,"bookmarkBar");Ce(this,"loaded",!1);Ce(this,"doc");Ce(this,"omnisearchEnabled");Ce(this,"createWebview",()=>{this.contentEl.empty(),this.plugin.settings.bookmarkManager.openBookMark&&(this.bookmarkBar=new AN(this.leaf.view,this.plugin),this.bookmarkBar.onload());const n=this.contentEl.doc;this.webviewEl=n.createElement("webview"),this.webviewEl.setAttribute("allowpopups",""),this.webviewEl.partition="persist:surfing-vault-"+this.app.appId,this.webviewEl.addClass("wb-frame"),this.contentEl.appendChild(this.webviewEl),this.currentUrl&&this.navigate(this.currentUrl),this.headerBar.addOnSearchBarEnterListener(r=>{this.navigate(r)}),this.webviewEl.addEventListener("dom-ready",async r=>{const 
o=Un.remote.webContents.fromId(this.webviewEl.getWebContentsId());o.setWindowOpenHandler(i=>(on.spawnWebBrowserView(this.plugin,!0,{url:i.url,active:i.disposition!=="background-tab"}),{action:"allow"})),await this.registerContextMenuInWebcontents(o),await this.registerJavascriptInWebcontents(o),o.on("before-input-event",(i,a)=>{if(a.type!=="keyDown")return;const s=new KeyboardEvent("keydown",{code:a.code,key:a.key,shiftKey:a.shift,altKey:a.alt,ctrlKey:a.control,metaKey:a.meta,repeat:a.isAutoRepeat});if(this.plugin.settings.focusSearchBarViaKeyboard&&s.key==="/"&&!this.plugin.settings.ignoreList.find(c=>this.currentUrl.contains(c.toLowerCase()))){o.executeJavaScript(` - document.activeElement instanceof HTMLInputElement - `,!0).then(c=>{c||this.headerBar.focus()});return}activeDocument.body.dispatchEvent(s),s.ctrlKey&&s.key==="f"&&(this.searchBox=new dL(this.leaf,o,this.plugin))});try{const i=this.plugin.settings.highlightFormat,a=()=>{var u;let s="";const c=(u=i.match(/\{TIME\:[^\{\}\[\]]*\}/g))==null?void 0:u[0];if(c){const p=ke.moment().format(c.replace(/{TIME:([^\}]*)}/g,"$1"));return s=i.replace(c,p),s}return s};o.executeJavaScript(` - window.addEventListener('dragstart', (e) => { - if(e.ctrlKey || e.metaKey) { - e.dataTransfer.clearData(); - const selectionText = document.getSelection().toString(); - - let tempText = encodeURIComponent(selectionText); - const chineseRegex = /[぀-ヿ㐀-䶿一-鿿豈-﫿ヲ-゚]/gi; - const englishSentence = selectionText.split('\\n'); - - if (selectionText.match(chineseRegex)?.length > 50) { - if (englishSentence.length > 1) { - const fistSentenceWords = englishSentence[0]; - const lastSentenceWords = englishSentence[englishSentence.length - 1]; - - tempText = encodeURIComponent(fistSentenceWords.slice(0, 4)) + "," + encodeURIComponent(lastSentenceWords.slice(lastSentenceWords.length - 4, lastSentenceWords.length)); - } else { - tempText = encodeURIComponent(selectionText.substring(0, 8)) + "," + 
encodeURIComponent(selectionText.substring(selectionText.length - 8, selectionText.length)); - } - } else if (englishSentence.length > 1) { - - const fistSentenceWords = englishSentence[0].split(' '); - const lastSentenceWords = englishSentence[englishSentence.length - 1].split(' '); - - tempText = encodeURIComponent(fistSentenceWords.slice(0, 3).join(' ')) + "," + encodeURIComponent(lastSentenceWords.slice(lastSentenceWords.length - 1, lastSentenceWords.length).join(' ')); - } - - const linkToHighlight = e.srcElement.baseURI.replace(/#:~:text=(.*)/g, "") + "#:~:text=" + tempText; - let link = ""; - if ("${i}".includes("{TIME")) { - link = "${a()}"; - } - link = (link != "" ? link : "${i}").replace(/{URL}/g, linkToHighlight).replace(/{CONTENT}/g, selectionText.replace(/\\n/g, " ")); - - e.dataTransfer.setData('text/plain', link); - } - }); - `,!0).then(s=>{})}catch{}}),this.webviewEl.addEventListener("focus",r=>{this.app.workspace.setActiveLeaf(this.leaf)}),this.webviewEl.addEventListener("page-favicon-updated",r=>{r.favicons[0]!==void 0&&(this.favicon.src=r.favicons[0]),this.leaf.tabHeaderInnerIconEl.empty(),this.leaf.tabHeaderInnerIconEl.appendChild(this.favicon)}),this.webviewEl.addEventListener("page-title-updated",r=>{this.omnisearchEnabled&&this.updateSearchBox(),this.leaf.tabHeaderInnerTitleEl.innerText=r.title,this.currentTitle=r.title,this.app.workspace.trigger("surfing:page-change",r.title,this)}),this.webviewEl.addEventListener("will-navigate",r=>{this.navigate(r.url,!0,!1)}),this.webviewEl.addEventListener("did-navigate-in-page",r=>{var 
o;this.navigate(r.url,!0,!1),(o=this.menu)==null||o.close()}),this.webviewEl.addEventListener("new-window",r=>{r.preventDefault()}),this.webviewEl.addEventListener("did-attach-webview",r=>{}),this.webviewEl.addEventListener("destroyed",()=>{n!==this.contentEl.doc&&(this.webviewEl.detach(),this.createWebview())}),n.contains(this.contentEl)?this.contentEl.appendChild(this.webviewEl):this.contentEl.onNodeInserted(()=>{this.loaded||(this.loaded=!0,this.contentEl.doc===n?this.contentEl.appendChild(this.webviewEl):this.createWebview())})});Ce(this,"createMenu",(n,r)=>{var s,c;this.menu&&((s=this.menu)==null||s.close());const o=this;this.menu=new ke.Menu;const i=()=>{var u;(u=this.leaf)==null||u.history.back()},a=()=>{var u;(u=this.leaf)==null||u.history.forward()};if(r.selectionText||this.menu.addItem(u=>{u.setTitle(je("Refresh Current Page")),u.setIcon("refresh-ccw"),u.onClick(()=>{var p;(p=this.leaf)==null||p.rebuildView()})}).addItem(u=>{u.setTitle(je("Back")),u.setIcon("arrow-left"),u.onClick(()=>{i()})}).addItem(u=>{u.setTitle(je("Forward")),u.setIcon("arrow-right"),u.onClick(()=>{a()})}).addSeparator(),this.menu.addItem(u=>{u.setTitle(je("Open Current URL In External Browser")),u.setIcon("link"),u.onClick(()=>{window.open(r.pageURL,"_blank")})}).addItem(u=>{u.setTitle(je("Save Current Page As Markdown")),u.setIcon("download"),u.onClick(async()=>{try{n.executeJavaScript(` - document.body.outerHTML - `,!0).then(async p=>{const v=r.pageURL.replace(/\?(.*)/g,""),h=p.replaceAll(/src="(?!(https|http))([^"]*)"/g,'src="'+v+'$2"'),m=ke.htmlToMarkdown(h),b=n.getTitle().replace(/[/\\?%*:|"<>]/g,"-"),y=await o.app.vault.create((o.plugin.settings.markdownPath?o.plugin.settings.markdownPath+"/":"/")+b+".md",m);await o.app.workspace.openLinkText(y.path,"",!0)})}catch{}})}).addItem(u=>{u.setTitle(je("Copy Current Viewport As Image")),u.setIcon("image"),u.onClick(async()=>{try{n.capturePage().then(async 
p=>{Un.clipboard.writeImage(p)})}catch{}})}),r.selectionText){this.menu.addSeparator(),this.menu.addItem(p=>{p.setTitle(je("Search Text")),p.setIcon("search"),p.onClick(()=>{try{on.spawnWebBrowserView(o.plugin,!0,{url:r.selectionText})}catch{}})}),this.menu.addSeparator(),this.menu.addItem(p=>{p.setTitle(je("Copy Plain Text")),p.setIcon("copy"),p.onClick(()=>{try{navigator.clipboard.writeText(r.selectionText)}catch{}})}),this.menu.addItem(p=>{p.setTitle("Save selection as markdown").setIcon("download").onClick(async()=>{const v=r.selectionText,h=n.getTitle().replace(/[/\\?%*:|"<>]/g,"-"),m=await o.app.vault.create((o.plugin.settings.markdownPath?o.plugin.settings.markdownPath+"/":"/")+h+".md",v);await o.app.workspace.openLinkText(m.path,"",!0)})});const u=this.plugin.settings.highlightFormat;this.menu.addItem(p=>{p.setTitle(je("Copy Link to Highlight")),p.setIcon("link"),p.onClick(()=>{var v,h;try{let m=encodeURIComponent(r.selectionText);const b=/[\u3040-\u30ff\u3400-\u4dbf\u4e00-\u9fff\uf900-\ufaff\uff66-\uff9f]/gi,y=r.selectionText.split(` -`);if(((v=r.selectionText.match(b))==null?void 0:v.length)>50)if(y.length>1){const E=y[0],k=y[y.length-1];m=encodeURIComponent(E.slice(0,3))+","+encodeURIComponent(k.slice(k.length-4,k.length))}else m=encodeURIComponent(r.selectionText.substring(0,8))+","+encodeURIComponent(r.selectionText.substring(r.selectionText.length-8,r.selectionText.length));else if(y.length>1){const E=y[0].split(" "),k=y[y.length-1].split(" ");m=encodeURIComponent(E.slice(0,3).join(" "))+","+encodeURIComponent(k.slice(k.length-1,k.length).join(" "))}const w=r.pageURL.replace(/\#\:\~\:text\=(.*)/g,"")+"#:~:text="+m,C=r.selectionText.replace(/\n/g," ");let S="";if(u.contains("{TIME")){const E=(h=u.match(/\{TIME\:[^\{\}\[\]]*\}/g))==null?void 0:h[0];if(E){const 
k=ke.moment().format(E.replace(/{TIME:([^\}]*)}/g,"$1"));S=u.replace(E,k)}}S=(S!=""?S:u).replace(/\{URL\}/g,w).replace(/\{CONTENT\}/g,C),Un.clipboard.writeText(S)}catch{}})})}(c=r.pageURL)!=null&&c.contains("bilibili.com/")&&(this.menu.addSeparator(),this.menu.addItem(u=>{u.setTitle(je("Copy Video Timestamp")),u.setIcon("link"),u.onClick(()=>{try{n.executeJavaScript(` - var time = document.querySelectorAll('.bpx-player-ctrl-time-current')[0].innerHTML; - var timeYMSArr=time.split(':'); - var joinTimeStr='00h00m00s'; - if(timeYMSArr.length===3){ - joinTimeStr=timeYMSArr[0]+'h'+timeYMSArr[1]+'m'+timeYMSArr[2]+'s'; - }else if(timeYMSArr.length===2){ - joinTimeStr=timeYMSArr[0]+'m'+timeYMSArr[1]+'s'; - } - var timeStr= ""; - var pageStrMatch = window.location.href.match(/(p=[1-9]{1,})/g); - var pageStr = ""; - if(typeof pageStrMatch === "object" && pageStrMatch?.length > 0){ - pageStr = '&' + pageStrMatch[0]; - }else if(typeof pageStrMatch === "string") { - pageStr = '&' + pageStrMatch; - } - timeStr = window.location.href.split('?')[0]+'?t=' + joinTimeStr + pageStr; - `,!0).then(p=>{Un.clipboard.writeText("["+p.split("?t=")[1].replace(/&p=[1-9]{1,}/g,"")+"]("+p+")")})}catch{}})})),this.menu.showAtPosition({x:r.x,y:r.y})});this.plugin=r,this.omnisearchEnabled=!1}static spawnWebBrowserView(n,r,o){var p,v,h,m,b,y,w,C,S,E,k,O,$;const i=n.settings,a=i.openInSameTab,s=i.highlightInSameTab,c=n.app;if(!a||o.url.startsWith("file://")){if(o.url.contains("bilibili"))for(let T=0;T{app.setting.open(),app.setting.openTabById("surfing")}),this.addAction("star",je("star"),async()=>{const n=await Ua(this.plugin),r=n.bookmarks;try{if(r.some(i=>i.url===this.currentUrl))new ke.Notice("Bookmark already exists.");else{const i=Un.remote.webContents.fromId(this.webviewEl.getWebContentsId());let a="";try{i.executeJavaScript(` - document.querySelector('meta[name="description"]')?.content - `).then(c=>{c&&(a=c)})}catch{}const 
s=this.plugin.settings.bookmarkManager.defaultCategory.split(",").map(c=>c.trim());r.unshift({id:String(Dd(this.currentUrl)),name:this.currentTitle,url:this.currentUrl,description:a,category:s.length>0?s:["ROOT"],tags:"",created:ke.moment().valueOf(),modified:ke.moment().valueOf()}),await Il(this.plugin,{bookmarks:r,categories:n.categories}),fv(this.plugin,r,n.categories,!0)}}catch{new ke.Notice("Failed to add bookmark.")}}),this.plugin.settings.bookmarkManager.sendToReadWise&&this.addAction("book",je("Send to ReadWise"),async()=>{const n=(r,o)=>{open("https://readwise.io/save?title="+encodeURIComponent(r)+"&url="+encodeURIComponent(o))};try{await n(this.currentTitle,this.currentUrl),new ke.Notice("Save success!")}catch{new ke.Notice("Save failed!")}})}async setState(n,r){this.navigate(n.url,!1)}updateSearchBox(){var s,c;const n=[...Ri,...this.plugin.settings.customSearchEngine],r=/^(?:https?:\/\/)?(?:[^@/\n]+@)?(?:www\.)?([^:/?\n]+)/g,o=(c=(s=this.currentUrl)==null?void 0:s.match(r))==null?void 0:c[0];if(!o||!n.find(u=>u.url.startsWith(o)))return;const a=Un.remote.webContents.fromId(this.webviewEl.getWebContentsId());try{a.executeJavaScript(` - document.querySelector('input')?.value - `,!0).then(u=>{this.searchContainer.update(u==null?void 0:u.toLowerCase())})}catch{}}async registerContextMenuInWebcontents(n){n.executeJavaScript(` - window.addEventListener('contextmenu', (e) => { - e.preventDefault(); - e.stopPropagation(); - e.stopImmediatePropagation(); - window.myPostPort?.postMessage('contextmenu ' + e.clientX + ' ' + e.clientY); - }) - `),await n.executeJavaScript(` - window.addEventListener("message", (e) => { - window.myPostPort = e.ports[0]; - }) - document.addEventListener('click', (e) => { - window.myPostPort?.postMessage('click'); - }); - document.addEventListener('scroll', (e) => { - window.myPostPort?.postMessage('scroll'); - });0 - - `);const r=new MessageChannel;r.port1.onmessage=o=>{var 
i,a,s,c;if(o.data==="contextmenu"||(i=o.data)!=null&&i.startsWith("contextmenu")){(a=this.menu)==null||a.close();const{x:u,y:p}=o.data.split(" ").length>1?{x:o.data.split(" ")[1],y:o.data.split(" ")[2]}:{x:o.x,y:o.y},v=this.webviewEl.getClientRects(),h={x:parseInt(u,10)+v[0].x,y:parseInt(p,10)+v[0].y},m=this.currentUrl;let b="";try{n.executeJavaScript("window.getSelection().toString()",!0).then(y=>{b=y,this.createMenu(n,{...h,pageURL:m,selectionText:b})})}catch{}return}if(o.data&&o.data.startsWith("link ")){this.hoverPopover&&this.hoverPopover.hide();const u=o.data.split(" ")[1],p=o.data.split(" ")[2],v=o.data.split(" ")[3];if(!v||!v.startsWith("http"))return;this.hoverPopover=new ke.HoverPopover(this.contentEl,null,100);const h=this.webviewEl.getClientRects(),m={x:parseInt(u,10)+h[0].x,y:parseInt(p,10)+h[0].y};setTimeout(()=>{this.hoverPopover.position({x:m.x,y:m.y,doc:this.doc})},100),this.hoverPopover.hoverEl.toggleClass("surfing-hover-popover",!0);const b=this.hoverPopover.hoverEl.createEl("div",{cls:"surfing-hover-popover-container"});new Xb(b,v,this.plugin).onload();return}o.data!=="darkreader-failed"?((s=this.menu)==null||s.close(),(c=this.hoverPopover)==null||c.hide()):o.data==="darkreader-failed"&&n.executeJavaScript(` - window.getComputedStyle( document.body ,null).getPropertyValue('background-color'); - `,!0).then(u=>{const p=u.slice(u.indexOf("(")+1,u.indexOf(")")).split(", ");Math.sqrt(p[0]**2*.241+p[1]**2*.691+p[2]**2*.068)>120&&n.insertCSS(` - html { - filter: invert(90%) hue-rotate(180deg); - } - - img, svg, div[class*="language-"] { - filter: invert(110%) hue-rotate(180deg); - opacity: .8; - } - - video, canvas { - filter: invert(110%) hue-rotate(180deg); - opacity: 1; - } - `)})},await this.webviewEl.contentWindow.postMessage("test","*",[r.port2])}async registerJavascriptInWebcontents(n){try{if(this.plugin.settings.darkMode)try{await n.executeJavaScript(` - const element = document.createElement('script'); - - 
fetch('https://cdn.jsdelivr.net/npm/darkreader/darkreader.min.js') - .then((response) => { - element.src = response.url; - document.body.appendChild(element); - }) - .catch((error) => { - console.error('Error loading the script:', error); - }); - - element.onload = () => { - try { - DarkReader?.setFetchMethod(window.fetch); - DarkReader?.enable({ - brightness: 100, - contrast: 90, - sepia: 10 - }); - console.log(DarkReader); - } catch (err) { - - window.myPostPort?.postMessage('darkreader-failed'); - console.error('Failed to load dark reader: ', err); - - } - };0 - `)}catch{}}catch{}n.executeJavaScript(` - window.addEventListener('mouseover', (e) => { - if(!e.target) return; - if(!e.ctrlKey && !e.metaKey) return; - // Tag name is a tag - if(e.target.tagName.toLowerCase() === 'a'){ - window.myPostPort?.postMessage('link ' + e.clientX + ' ' + e.clientY + ' ' + e.target.href); - } - }); - `)}clearHistory(){const n=Un.remote.webContents.fromId(this.webviewEl.getWebContentsId());n&&(n.clearHistory(),n.executeJavaScript("history.pushState({}, '', location.href)"),this.leaf.history.backHistory.splice(0),this.leaf.history.forwardHistory.splice(0))}getState(){return{url:this.currentUrl}}getCurrentTitle(){return this.currentTitle}navigate(n,r=!0,o=!0){var s,c,u,p;if(n==="")return;r&&((u=(c=(s=this.leaf.history.backHistory.last())==null?void 0:s.state)==null?void 0:c.state)==null?void 0:u.url)!==this.currentUrl&&(this.leaf.history.backHistory.push({state:{type:Nr,state:this.getState()},title:this.currentTitle,icon:"search"}),this.headerEl.children[1].children[0].setAttribute("aria-disabled","false"));const i=/^(https?:\/\/)?(www\.)?[-a-zA-Z0-9@:%._\+~#?&//=]{1,256}\.[a-zA-Z0-9()]{1,6}\b([-a-zA-Z0-9()@:%_\+.~#?&//=]*)$/g,a=/((([A-Za-z]{3,9}:(?:\/\/)?)(?:[-;:&=\+\$,\w]+@)?[A-Za-z0-9.-]+(:[0-9]+)?|(?:www.|[-;:&=\+\$,\w]+@)[A-Za-z0-9.-]+)((?:\/[\+~%\/.\w\-_]*)?\??(?:[-\+=&;%@.\w_]*)#?(?:[\w]*))?)/g;if(i.test(n)){const 
v=n.slice(0,7).toLowerCase(),h=n.slice(0,8).toLowerCase();v==="http://"||v==="file://"||h==="https://"||(n="https://"+n)}else if(!(n.startsWith("file://")||/\.htm(l)?/g.test(n))&&!a.test(encodeURI(n))||!/^(https?|file):\/\//g.test(n)){const h=[...Ri,...this.plugin.settings.customSearchEngine].find(m=>m.name.toLowerCase()===this.plugin.settings.defaultSearchEngine);n=(h?h.url:Ri[0].url)+n}this.currentUrl=n,this.headerBar.setSearchBarUrl(n),o&&this.webviewEl.setAttribute("src",n),(p=this.searchBox)==null||p.unload(),this.app.workspace.requestSaveLayout()}getCurrentTimestamp(n){Un.remote.webContents.fromId(this.webviewEl.getWebContentsId()).executeJavaScript(` - var time = document.querySelectorAll('.bpx-player-ctrl-time-current')[0].innerHTML; - var timeYMSArr=time.split(':'); - var joinTimeStr='00h00m00s'; - if(timeYMSArr.length===3){ - joinTimeStr=timeYMSArr[0]+'h'+timeYMSArr[1]+'m'+timeYMSArr[2]+'s'; - }else if(timeYMSArr.length===2){ - joinTimeStr=timeYMSArr[0]+'m'+timeYMSArr[1]+'s'; - } - var timeStr= ""; - timeStr = window.location.href.split('?')[0]+'?t=' + joinTimeStr; - `,!0).then(o=>{const i="["+o.split("?t=")[1]+"]("+o+") ",a=n==null?void 0:n.posToOffset(n==null?void 0:n.getCursor());n==null||n.replaceRange(i,n==null?void 0:n.getCursor()),a&&(n==null||n.setCursor(n==null?void 0:n.offsetToPos(a+i.length)))})}refresh(){Un.remote.webContents.fromId(this.webviewEl.getWebContentsId()).reload()}copyHighLight(){const n=this.plugin.settings.highlightFormat,r=()=>{var u;let s="";const c=(u=n.match(/\{TIME\:[^\{\}\[\]]*\}/g))==null?void 0:u[0];if(c){const p=ke.moment().format(c.replace(/{TIME:([^\}]*)}/g,"$1"));return s=n.replace(c,p),s}return s},o=()=>this.currentUrl,i=()=>n.includes("{TIME");Un.remote.webContents.fromId(this.webviewEl.getWebContentsId()).executeJavaScript(` - const selectionText = document.getSelection().toString(); - let tempText = encodeURIComponent(selectionText); - const chineseRegex = /[぀-ヿ㐀-䶿一-鿿豈-﫿ヲ-゚]/gi; - const englishSentence = 
selectionText.split('\\n'); - - if (selectionText.match(chineseRegex)?.length > 50) { - if (englishSentence.length > 1) { - const fistSentenceWords = englishSentence[0]; - const lastSentenceWords = englishSentence[englishSentence.length - 1]; - - tempText = encodeURIComponent(fistSentenceWords.slice(0, 4)) + "," + encodeURIComponent(lastSentenceWords.slice(lastSentenceWords.length - 4, lastSentenceWords.length)); - } else { - tempText = encodeURIComponent(selectionText.substring(0, 8)) + "," + encodeURIComponent(selectionText.substring(selectionText.length - 8, selectionText.length)); - } - } else if (englishSentence.length > 1) { - - const fistSentenceWords = englishSentence[0].split(' '); - const lastSentenceWords = englishSentence[englishSentence.length - 1].split(' '); - - tempText = encodeURIComponent(fistSentenceWords.slice(0, 3).join(' ')) + "," + encodeURIComponent(lastSentenceWords.slice(lastSentenceWords.length - 1, lastSentenceWords.length).join(' ')); - } - - let linkToHighlight = "${o()}".replace(/#:~:text=(.*)/g, "") + "#:~:text=" + tempText; - - let link = ""; - if (${i()}) { - link = "${r()}"; - } - link = (link != "" ? 
link : "${n}").replace(/{URL}/g, linkToHighlight).replace(/{CONTENT}/g, selectionText.replace(/\\n/g, " ")); - - `,!0).then(s=>{Un.clipboard.writeText(s)})}}const sp=e=>e.startsWith("http://")||e.startsWith("https://")||e.startsWith("file://")&&/\.htm(l)?/g.test(e),s1=(e,t)=>{const n=/^(https?:\/\/)?(www\.)?[-a-zA-Z0-9@:%._\+~#?&//=]{1,256}\.[a-zA-Z0-9()]{1,6}\b([-a-zA-Z0-9()@:%_\+.~#?&//=]*)$/g,r=/((([A-Za-z]{3,9}:(?:\/\/)?)(?:[-;:&=\+\$,\w]+@)?[A-Za-z0-9.-]+(:[0-9]+)?|(?:www.|[-;:&=\+\$,\w]+@)[A-Za-z0-9.-]+)((?:\/[\+~%\/.\w\-_]*)?\??(?:[-\+=&;%@.\w_]*)#?(?:[\w]*))?)/g;let o=t;if(n.test(o)){const i=o.slice(0,7).toLowerCase(),a=o.slice(0,8).toLowerCase();i==="http://"||i==="file://"||a==="https://"||(o="https://"+o)}else!(o.startsWith("file://")||/\.htm(l)?/g.test(o))&&!r.test(encodeURI(o))&&(o=e+o);return/^(https?|file):\/\//g.test(o)||(o=e+o),o},AZ=(e,t)=>{let n=e;if(!n)return;const r=t.settings,o=/^(https?:\/\/)?(www\.)?[-a-zA-Z0-9@:%._\+~#?&//=]{1,256}\.[a-zA-Z0-9()]{1,6}\b([-a-zA-Z0-9()@:%_\+.~#?&//=]*)$/g,i=/((([A-Za-z]{3,9}:(?:\/\/)?)(?:[-;:&=\+\$,\w]+@)?[A-Za-z0-9.-]+(:[0-9]+)?|(?:www.|[-;:&=\+\$,\w]+@)[A-Za-z0-9.-]+)((?:\/[\+~%\/.\w\-_]*)?\??(?:[-\+=&;%@.\w_]*)#?(?:[\w]*))?)/g;if(o.test(n)){const a=n.slice(0,7).toLowerCase(),s=n.slice(0,8).toLowerCase();a==="http://"||a==="file://"||s==="https://"||(n="https://"+n)}else if(!(n.startsWith("file://")||/\.htm(l)?/g.test(n))&&!i.test(encodeURI(n))||!/^(https?|file):\/\//g.test(n)){const s=[...Ri,...r.customSearchEngine].find(c=>c.name===r.defaultSearchEngine);n=(s?s.url:Ri[0].url)+n}return n||e};function PO(e){if(!e||e.contains(" "))return!1;try{new URL(e)}catch{return!1}return!0}const zZ=/^(([^<>()[\]\\.,;:\s@\"`]+(\.[^<>()[\]\\.,;:\s@\"]+)*)|(\".+\"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))\b/;function MO(e){return zZ.test(e)}class HZ extends 
Ry{constructor(n,r,o){super(n,o);Ce(this,"plugin");Ce(this,"bookmarkData",[]);Ce(this,"suggestions",[]);this.app=n,this.inputEl=o,this.plugin=r}getSuggestions(n){const r=n.toLowerCase();try{this.suggestions.length===0&&Ua(this.plugin).then(i=>{this.bookmarkData=i.bookmarks,this.suggestions=this.bookmarkData})}catch{}if(this.suggestions.length===0)return[];const o=this.suggestions.filter(i=>{if(i.url.toLowerCase().contains(r)||i.name.toLowerCase().contains(r))return i});return o||this.close(),(o==null?void 0:o.length)>0?(o.unshift({id:"BOOKMARK",name:r,description:"",url:"",tags:"",category:[],created:1111111111111,modified:1111111111111}),o):o||[]}renderSuggestion(n,r){const o=r.createEl("div",{cls:"wb-bookmark-suggest-container"});o.createEl("div",{text:n.name,cls:"wb-bookmark-suggestion-text"}),o.createEl("div",{text:n.url,cls:"wb-bookmark-suggestion-url"}),r.classList.add("wb-bookmark-suggest-item")}selectSuggestion(n){if(n){if(n.id==="BOOKMARK"){const r=s1("",n.name);on.spawnWebBrowserView(this.plugin,!1,{url:r}),this.close();return}on.spawnWebBrowserView(this.plugin,!1,{url:n.url}),this.close()}}}class FZ extends Ry{constructor(n,r,o,i){super(n,o);Ce(this,"plugin");Ce(this,"files");Ce(this,"view");this.app=n,this.inputEl=o,this.plugin=r,this.view=i,this.files=this.app.vault.getFiles()}fuzzySearchItemsOptimized(n,r){const o=ke.prepareFuzzySearch(n);return r.map(i=>{const a=o(i);return a?{item:i,match:a}:null}).filter(Boolean)}getSuggestions(n){const r=this.files.map(i=>i.path),o=this.fuzzySearchItemsOptimized(n.slice(1),r).sort((i,a)=>a.match.score-i.match.score).map(i=>({path:i.item,type:"file"}));return o.unshift({path:n,type:"web"}),o}renderSuggestion(n,r){r.createEl("div",{cls:"wb-bookmark-suggest-container"}).createEl("div",{text:n.path,cls:"wb-bookmark-suggestion-text"}),r.classList.add("wb-bookmark-suggest-item")}async selectSuggestion(n){if(n){switch(n.type){case"web":{const 
r=s1("",n.path);on.spawnWebBrowserView(this.plugin,!1,{url:r});break}case"file":{const r=this.files.find(o=>o.path===n.path);r&&await this.view.leaf.openFile(r);break}}this.close()}}}class Gb extends ke.Component{constructor(n,r,o,i){super();Ce(this,"plugin");Ce(this,"searchBar");Ce(this,"onSearchBarEnterListener",new Array);Ce(this,"view");Ce(this,"parentEl");Ce(this,"removeHeaderChild",!0);this.plugin=r,this.view=o,this.parentEl=n,i!==void 0&&(this.removeHeaderChild=i)}onLoad(){if(this.parentEl.addClass("wb-header-bar"),this.removeHeaderChild&&this.parentEl.empty(),this.initScope(),this.plugin.settings.showRefreshButton&&this.removeHeaderChild&&this.view.getViewType()!=="empty"){const n=this.parentEl.createEl("div",{cls:"wb-refresh-button"});n.addEventListener("click",()=>{this.view.leaf.rebuildView()}),ke.setIcon(n,"refresh-cw")}this.searchBar=this.parentEl.createEl("input",{type:"text",placeholder:je("Search with")+this.plugin.settings.defaultSearchEngine+je("or enter address"),cls:"wb-search-bar"}),this.registerDomEvent(this.searchBar,"keydown",n=>{if(n.key==="Enter")for(const r of this.onSearchBarEnterListener)r(this.searchBar.value)}),this.plugin.settings.bookmarkManager.openBookMark||new FZ(this.plugin.app,this.plugin,this.searchBar,this.view),this.plugin.settings.bookmarkManager.openBookMark&&new HZ(this.plugin.app,this.plugin,this.searchBar),this.registerDomEvent(this.searchBar,"focusin",n=>{this.searchBar.select()}),this.registerDomEvent(this.searchBar,"focusout",n=>{var r;(r=window.getSelection())==null||r.removeAllRanges(),this.removeHeaderChild||this.searchBar.detach()})}initScope(){this.view.scope?this.view.scope.register([],"/",n=>{this.plugin.settings.focusSearchBarViaKeyboard&&n.target!==this.searchBar&&(n.preventDefault(),this.searchBar.focus())}):(this.view.scope=new 
ke.Scope(this.plugin.app.scope),this.view.scope.register([],"/",n=>{this.plugin.settings.focusSearchBarViaKeyboard&&n.target!==this.searchBar&&(n.preventDefault(),this.searchBar.focus())}))}addOnSearchBarEnterListener(n){this.onSearchBarEnterListener.push(n)}setSearchBarUrl(n){this.searchBar.value=n}focus(){this.searchBar.focus()}}const Yb=["html","htm"],Qb="surfing-file-view";class _Z extends ke.FileView{constructor(n,r){super(n);Ce(this,"allowNoFile");Ce(this,"plugin");this.allowNoFile=!1,this.plugin=r}async onLoadFile(n){const o="file:///"+(this.app.vault.adapter.getBasePath()+"/"+n.path).toString().replace(/\s/g,"%20");on.spawnWebBrowserView(this.plugin,!0,{url:o}),this.leaf&&this.leaf.detach()}onunload(){}canAcceptExtension(n){return Yb.includes(n)}getViewType(){return Qb}}function Yi(e,t){const n=Object.keys(t).map(r=>VZ(e,r,t[r]));return n.length===1?n[0]:function(){n.forEach(r=>r())}}function VZ(e,t,n){const r=e[t],o=e.hasOwnProperty(t);let i=n(r);return r&&Object.setPrototypeOf(i,r),Object.setPrototypeOf(a,i),e[t]=a,s;function a(...c){return i===r&&e[t]===a&&s(),i.apply(this,c)}function s(){e[t]===a&&(o?e[t]=r:delete e[t]),i!==r&&(i=r,Object.setPrototypeOf(a,r||Function))}}class WZ extends Ry{constructor(n,r,o,i){super(n,o);Ce(this,"searchEngines");Ce(this,"searchEnginesString",[]);Ce(this,"plugin");Ce(this,"mode","web");Ce(this,"files",[]);Ce(this,"view");this.app=n,this.inputEl=o,this.plugin=r,this.files=this.app.vault.getFiles(),this.view=i}fuzzySearchItemsOptimized(n,r){const o=ke.prepareFuzzySearch(n);return r.map(i=>{const a=o(i);return a?{item:i,match:a}:null}).filter(Boolean)}getSuggestions(n){if(n.trim().startsWith("/")){this.mode="file";const o=this.files.map(a=>a.path);return this.fuzzySearchItemsOptimized(n.slice(1),o).sort((a,s)=>s.match.score-a.match.score).map(a=>a.item)}this.mode="web",this.searchEnginesString=[];const r=this.plugin.settings.defaultSearchEngine;return 
this.searchEngines=[...Ri,...this.plugin.settings.customSearchEngine].sort(function(o,i){return o.name.toLowerCase()==r.toLowerCase()?-1:i.name.toLowerCase()==r.toLowerCase()?1:0}),this.searchEngines.forEach(o=>{this.searchEnginesString.push(o.name)}),this.searchEnginesString}renderSuggestion(n,r){switch(this.mode){case"web":r.createEl("div",{text:je("Search with")+n,cls:"wb-search-suggestion-text"}),r.classList.add("wb-search-suggest-item");break;case"file":r.createEl("div",{text:"Open "+n,cls:"wb-search-suggestion-text"}),r.classList.add("wb-search-suggest-item");break}}async selectSuggestion(n){const r=this.inputEl.value;if(r.trim()!=="")switch(this.mode){case"web":{const o=this.searchEngines.find(s=>s.name===n),i=o?o.url:Ri[0].url,a=s1(i,r);on.spawnWebBrowserView(this.plugin,!1,{url:a});break}case"file":{const o=this.files.find(i=>i.path===n);o&&await this.view.leaf.openFile(o);break}}}}class UZ extends ke.Component{constructor(n,r,o){super();Ce(this,"plugin");Ce(this,"inPageSearchBarInputEl");Ce(this,"SearchBarInputContainerEl");Ce(this,"inPageSearchBarContainerEl");Ce(this,"onSearchBarEnterListener",new Array);Ce(this,"searchEnginesSuggester");Ce(this,"view");this.plugin=o,this.view=r,this.inPageSearchBarContainerEl=n.createEl("div",{cls:"wb-page-search-bar-container"}),this.initScope(),this.inPageSearchBarContainerEl.createEl("div",{text:"Surfing",cls:"wb-page-search-bar-text"}),this.SearchBarInputContainerEl=this.inPageSearchBarContainerEl.createEl("div",{cls:"wb-page-search-bar-input-container"}),this.inPageSearchBarInputEl=this.SearchBarInputContainerEl.createEl("input",{type:"text",placeholder:je("Search with")+this.plugin.settings.defaultSearchEngine+je("or enter address"),cls:"wb-page-search-bar-input"}),this.registerDomEvent(this.inPageSearchBarInputEl,"keydown",i=>{if(i.key==="Enter")for(const a of 
this.onSearchBarEnterListener)a(this.inPageSearchBarInputEl.value)}),this.registerDomEvent(this.inPageSearchBarInputEl,"focusin",i=>{this.inPageSearchBarInputEl.select()}),this.registerDomEvent(this.inPageSearchBarInputEl,"focusout",i=>{var a;(a=window.getSelection())==null||a.removeAllRanges()}),this.plugin.settings.showOtherSearchEngines&&(this.searchEnginesSuggester=new WZ(this.plugin.app,this.plugin,this.inPageSearchBarInputEl,this.view))}addOnSearchBarEnterListener(n){this.onSearchBarEnterListener.push(n)}initScope(){this.plugin.settings.focusSearchBarViaKeyboard&&(this.view.scope?this.view.scope.register([],"i",n=>{n.target!==this.inPageSearchBarInputEl&&(n.preventDefault(),this.inPageSearchBarInputEl.focus())}):(this.view.scope=new ke.Scope(this.plugin.app.scope),this.view.scope.register([],"i",n=>{n.target!==this.inPageSearchBarInputEl&&(n.preventDefault(),this.inPageSearchBarInputEl.focus())})))}focus(){this.inPageSearchBarInputEl.focus()}}class KZ{constructor(t,n,r){Ce(this,"plugin");Ce(this,"view");Ce(this,"closeBtnEl");Ce(this,"searchBtnEl");Ce(this,"createBtnEl");Ce(this,"iconListEl");Ce(this,"searchBtn");Ce(this,"createBtn");Ce(this,"closeBtn");this.plugin=r,this.view=n,this.iconListEl=t.createEl("div",{cls:"wb-icon-list-container"}),this.createBtnEl=this.iconListEl.createEl("div",{cls:"wb-create-btn"}),this.searchBtnEl=this.iconListEl.createEl("div",{cls:"wb-search-btn"}),this.closeBtnEl=this.iconListEl.createEl("div",{cls:"wb-close-btn"}),this.closeBtn=new ke.ButtonComponent(this.closeBtnEl),this.createBtn=new ke.ButtonComponent(this.createBtnEl),this.searchBtn=new ke.ButtonComponent(this.searchBtnEl),this.createBtn.setIcon("file-plus").onClick(()=>{this.plugin.app.commands.executeCommandById("file-explorer:new-file")}),this.searchBtn.setIcon("file-search-2").onClick(()=>{this.plugin.app.commands.executeCommandById("switcher:open")}),this.closeBtn.setIcon("x-square").onClick(()=>{var 
o,i;(o=this.view)!=null&&o.leaf&&((i=this.view)==null||i.leaf.detach())}),this.closeBtn.setTooltip(je("Close Current Leaf")),this.createBtn.setTooltip(je("Create A New Note")),this.searchBtn.setTooltip(je("Open Quick Switcher"))}onunload(){this.searchBtn.buttonEl.detach(),this.createBtn.buttonEl.detach(),this.closeBtn.buttonEl.detach()}}class qZ extends ke.Component{constructor(n,r,o){super();Ce(this,"node");Ce(this,"url");Ce(this,"searchBar");Ce(this,"onSearchBarEnterListener",new Array);Ce(this,"app");this.node=r,this.url=o,this.app=n}onload(){var r;const n=this.app.plugins.getPlugin("surfing").settings;this.searchBar=(r=this.node)==null?void 0:r.contentEl.createEl("input",{type:"text",placeholder:je("Search with")+n.defaultSearchEngine+je("or enter address"),cls:"wb-search-bar"}),this.registerDomEvent(this.searchBar,"keydown",o=>{if(o.key==="Enter")for(const i of this.onSearchBarEnterListener)i(this.searchBar.value)}),this.registerDomEvent(this.searchBar,"focusin",o=>{this.searchBar.select()}),this.registerDomEvent(this.searchBar,"focusout",o=>{var i;(i=window.getSelection())==null||i.removeAllRanges()})}addOnSearchBarEnterListener(n){this.onSearchBarEnterListener.push(n)}setSearchBarUrl(n){this.searchBar.value=n}focus(){this.searchBar.focus()}}class XZ{constructor(t,n,r,o){Ce(this,"contentEl");Ce(this,"webviewEl");Ce(this,"canvas");Ce(this,"node");Ce(this,"searchBarEl");Ce(this,"currentUrl");Ce(this,"plugin");Ce(this,"type");Ce(this,"editor");Ce(this,"widget");var i,a;this.contentEl=t.contentEl,this.node=t,this.canvas=o,this.type==="inline"&&(this.editor=(i=this.node)==null?void 0:i.editor,this.widget=(a=this.node)==null?void 0:a.widget),this.plugin=n,this.type=r}onload(){this.contentEl.empty(),this.type==="canvas"&&this.appendSearchBar(),this.type==="inline"&&this.appendShowOriginalCode(),this.appendWebView(),this.contentEl.toggleClass("wb-view-content",!0),this.type==="inline"&&this.contentEl.toggleClass("wb-browser-inline",!0)}appendShowOriginalCode(){const 
t=this.contentEl.createEl("div");t.addClass("wb-show-original-code"),new ke.ExtraButtonComponent(t).setIcon("code-2").setTooltip(je("Show original url"))}appendSearchBar(){this.searchBarEl=new qZ(this.plugin.app,this.node,this.node.url),this.searchBarEl.onload(),this.currentUrl=this.node.url,this.searchBarEl.setSearchBarUrl(this.node.url),this.searchBarEl.addOnSearchBarEnterListener(t=>{var o;const n=AZ(t,this.plugin);n?this.currentUrl=n:this.currentUrl=t,this.webviewEl.setAttribute("src",this.currentUrl),this.searchBarEl.setSearchBarUrl(this.currentUrl);const r=this.node.getData();r.url!==this.currentUrl&&(r.url=this.currentUrl,this.node.setData(r),(o=this.node.canvas)==null||o.requestSave(),this.node.render())})}appendWebView(){const t=this.contentEl.doc;this.webviewEl=t.createElement("webview"),this.webviewEl.setAttribute("allowpopups",""),this.webviewEl.addClass("wb-frame");const n=this;this.currentUrl?this.webviewEl.setAttribute("src",this.currentUrl):this.webviewEl.setAttribute("src",this.node.url),this.webviewEl.addEventListener("dom-ready",r=>{const o=Un.remote.webContents.fromId(this.webviewEl.getWebContentsId());o.setWindowOpenHandler(i=>{if(i.disposition!=="foreground-tab")return on.spawnWebBrowserView(n.plugin,!0,{url:i.url}),{action:"allow"};if(this.canvas){const a=this.canvas.createLinkNode(i.url,{x:this.node.x+this.node.width+20,y:this.node.y},{height:this.node.height,width:this.node.width});return this.canvas.deselectAll(),this.canvas.addNode(a),this.canvas.select(a),this.canvas.zoomToSelection(),this.canvas.requestSave(),{action:"allow"}}});try{const a=this.plugin.settings.highlightFormat,s=()=>{var p;let c="";const u=(p=a.match(/\{TIME\:[^\{\}\[\]]*\}/g))==null?void 0:p[0];if(u){const v=ke.moment().format(u.replace(/{TIME:([^\}]*)}/g,"$1"));return c=a.replace(u,v),c}return c};o.executeJavaScript(` - window.addEventListener('dragstart', (e) => { - if(e.ctrlKey || e.metaKey) { - e.dataTransfer.clearData(); - const selectionText = 
document.getSelection().toString(); - const linkToHighlight = e.srcElement.baseURI.replace(/#:~:text=(.*)/g, "") + "#:~:text=" + encodeURIComponent(selectionText); - let link = ""; - if ("${a}".includes("{TIME")) { - link = "${s()}"; - // // eslint-disable-next-line no-useless-escape - // const timeString = "${a}".match(/{TIME:[^{}[]]*}/g)?.[0]; - // if (timeString) { - // // eslint-disable-next-line no-useless-escape - // const momentTime = moment().format(timeString.replace(/{TIME:([^}]*)}/g, "$1")); - // link = "${a}".replace(timeString, momentTime); - // } - } - link = (link != "" ? link : "${a}").replace(/{URL}/g, linkToHighlight).replace(/{CONTENT}/g, selectionText.replace(/\\n/g, " ")); - - e.dataTransfer.setData('text/plain', link); - console.log(e); - } - }); - `,!0).then(c=>{})}catch{}o.on("context-menu",(i,a)=>{var p;i.preventDefault();const{Menu:s,MenuItem:c}=Un.remote,u=new s;if(u.append(new c({label:je("Open Current URL In External Browser"),click:function(){window.open(a.pageURL,"_blank")}})),u.append(new c({label:"Open Current URL In Surfing",click:function(){window.open(a.pageURL)}})),a.selectionText){const v=this.plugin.settings;u.append(new c({type:"separator"})),u.append(new c({label:je("Search Text"),click:function(){try{on.spawnWebBrowserView(n.plugin,!0,{url:a.selectionText})}catch{}}})),u.append(new c({type:"separator"})),u.append(new c({label:je("Copy Plain Text"),click:function(){try{o.copy()}catch{}}}));const h=v.highlightFormat;u.append(new c({label:je("Copy Link to Highlight"),click:function(){var m;try{const b=a.pageURL.replace(/\#\:\~\:text\=(.*)/g,"")+"#:~:text="+encodeURIComponent(a.selectionText),y=a.selectionText;let w="";if(h.contains("{TIME")){const C=(m=h.match(/\{TIME\:[^\{\}\[\]]*\}/g))==null?void 0:m[0];if(C){const 
S=ke.moment().format(C.replace(/{TIME:([^\}]*)}/g,"$1"));w=h.replace(C,S)}}w=(w!=""?w:h).replace(/\{URL\}/g,b).replace(/\{CONTENT\}/g,y),Un.clipboard.writeText(w)}catch{}}})),u.popup(o)}(p=a.pageURL)!=null&&p.contains("bilibili.com/")&&u.append(new c({label:je("Copy Video Timestamp"),click:function(){try{o.executeJavaScript(` - var time = document.querySelectorAll('.bpx-player-ctrl-time-current')[0].innerHTML; - var timeYMSArr=time.split(':'); - var joinTimeStr='00h00m00s'; - if(timeYMSArr.length===3){ - joinTimeStr=timeYMSArr[0]+'h'+timeYMSArr[1]+'m'+timeYMSArr[2]+'s'; - }else if(timeYMSArr.length===2){ - joinTimeStr=timeYMSArr[0]+'m'+timeYMSArr[1]+'s'; - } - var timeStr= ""; - var pageStrMatch = window.location.href.match(/(p=[1-9]{1,})/g); - var pageStr = ""; - if(typeof pageStrMatch === "object" && pageStrMatch?.length > 0){ - pageStr = '&' + pageStrMatch[0]; - }else if(typeof pageStrMatch === "string") { - pageStr = '&' + pageStrMatch; - } - timeStr = window.location.href.split('?')[0]+'?t=' + joinTimeStr + pageStr; - `,!0).then(v=>{Un.clipboard.writeText("["+v.split("?t=")[1].replace(/&p=[1-9]{1,}/g,"")+"]("+v+")")})}catch{}}})),setTimeout(()=>{u.popup(o),this.node.url!==a.pageURL&&!a.selectionText&&u.popup(o)},0)},!1)}),this.webviewEl.addEventListener("will-navigate",r=>{var o;if(this.type==="canvas"){const i=this.node.getData();i.url=r.url,this.node.setData(i),(o=this.node.canvas)==null||o.requestSave()}else this.node.url=r.url}),this.webviewEl.addEventListener("did-navigate-in-page",r=>{var o,i;if(this.type==="canvas"){const a=this.node.getData();if(r.url.contains("contacts.google.com/widget")||(o=this.node.canvas)!=null&&o.isDragging&&a.url===r.url){Un.remote.webContents.fromId(this.webviewEl.getWebContentsId()).stop();return}if(a.url===r.url)return;a.url=r.url,a.alwaysKeepLoaded=!0,this.node.setData(a),(i=this.node.canvas)==null||i.requestSave()}else 
this.node.url=r.url}),this.webviewEl.addEventListener("destroyed",()=>{t!==this.contentEl.doc&&(this.webviewEl.detach(),this.appendWebView())}),t.contains(this.contentEl)?this.contentEl.appendChild(this.webviewEl):this.contentEl.onNodeInserted(()=>{this.contentEl.doc===t?this.contentEl.appendChild(this.webviewEl):this.appendWebView()})}}class NO{constructor(t,n,r,o){Ce(this,"contentEl");Ce(this,"webviewEl");Ce(this,"node");Ce(this,"file");Ce(this,"menu");Ce(this,"app");Ce(this,"plugin");Ce(this,"currentUrl");this.contentEl=t.containerEl,this.node=t,this.file=n,this.app=r,this.plugin=o}load(){this.appendWebView(),this.contentEl.addClass("wb-view-content-embeded")}unload(){this.contentEl.removeClass("wb-view-content-embeded")}loadFile(t){this.load()}appendWebView(){const t=this.contentEl.doc;this.webviewEl=t.createElement("webview"),this.webviewEl.setAttribute("allowpopups",""),this.webviewEl.addClass("wb-frame");const n=this,r=this.app.vault.adapter;this.currentUrl="file:///"+(r.getBasePath()+"/"+this.file.path).toString().replace(/\s/g,"%20"),this.currentUrl?this.webviewEl.setAttribute("src",this.currentUrl):this.webviewEl.setAttribute("src",this.file.path),this.webviewEl.addEventListener("dom-ready",o=>{const i=Un.remote.webContents.fromId(this.webviewEl.getWebContentsId());i.setWindowOpenHandler(a=>{if(a.disposition!=="foreground-tab")return on.spawnWebBrowserView(n.plugin,!0,{url:a.url}),{action:"allow"}});try{const s=this.app.plugins.getPlugin("surfing").settings.highlightFormat,c=()=>{var v;let u="";const p=(v=s.match(/\{TIME\:[^\{\}\[\]]*\}/g))==null?void 0:v[0];if(p){const h=ke.moment().format(p.replace(/{TIME:([^\}]*)}/g,"$1"));return u=s.replace(p,h),u}return u};i.executeJavaScript(` - window.addEventListener('dragstart', (e) => { - if(e.ctrlKey || e.metaKey) { - e.dataTransfer.clearData(); - const selectionText = document.getSelection().toString(); - const linkToHighlight = e.srcElement.baseURI.replace(/#:~:text=(.*)/g, "") + "#:~:text=" + 
encodeURIComponent(selectionText); - let link = ""; - if ("${s}".includes("{TIME")) { - link = "${c()}"; - // // eslint-disable-next-line no-useless-escape - // const timeString = "${s}".match(/{TIME:[^{}[]]*}/g)?.[0]; - // if (timeString) { - // // eslint-disable-next-line no-useless-escape - // const momentTime = moment().format(timeString.replace(/{TIME:([^}]*)}/g, "$1")); - // link = "${s}".replace(timeString, momentTime); - // } - } - link = (link != "" ? link : "${s}").replace(/{URL}/g, linkToHighlight).replace(/{CONTENT}/g, selectionText.replace(/\\n/g, " ")); - - e.dataTransfer.setData('text/plain', link); - console.log(e); - } - }); - `,!0).then(u=>{})}catch{}i.on("context-menu",(a,s)=>{var p;a.preventDefault();const{Menu:c,MenuItem:u}=Un.remote;if(this.menu=new c,this.menu.append(new u({label:je("Open Current URL In External Browser"),click:function(){window.open(s.pageURL,"_blank")}})),this.menu.append(new u({label:"Open Current URL In Surfing",click:function(){window.open(s.pageURL)}})),s.selectionText){const v=this.app.plugins.getPlugin("surfing").settings;this.menu.append(new u({type:"separator"})),this.menu.append(new u({label:je("Search Text"),click:function(){try{on.spawnWebBrowserView(n.plugin,!0,{url:s.selectionText})}catch{}}})),this.menu.append(new u({type:"separator"})),this.menu.append(new u({label:je("Copy Plain Text"),click:function(){try{i.copy()}catch{}}}));const h=v.highlightFormat;this.menu.append(new u({label:je("Copy Link to Highlight"),click:function(){var m;try{const b=s.pageURL.replace(/\#\:\~\:text\=(.*)/g,"")+"#:~:text="+encodeURIComponent(s.selectionText),y=s.selectionText;let w="";if(h.contains("{TIME")){const C=(m=h.match(/\{TIME\:[^\{\}\[\]]*\}/g))==null?void 0:m[0];if(C){const 
S=ke.moment().format(C.replace(/{TIME:([^\}]*)}/g,"$1"));w=h.replace(C,S)}}w=(w!=""?w:h).replace(/\{URL\}/g,b).replace(/\{CONTENT\}/g,y),Un.clipboard.writeText(w)}catch{}}})),this.menu.popup(i)}(p=s.pageURL)!=null&&p.contains("bilibili.com/")&&this.menu.append(new u({label:je("Copy Video Timestamp"),click:function(){try{i.executeJavaScript(` - var time = document.querySelectorAll('.bpx-player-ctrl-time-current')[0].innerHTML; - var timeYMSArr=time.split(':'); - var joinTimeStr='00h00m00s'; - if(timeYMSArr.length===3){ - joinTimeStr=timeYMSArr[0]+'h'+timeYMSArr[1]+'m'+timeYMSArr[2]+'s'; - }else if(timeYMSArr.length===2){ - joinTimeStr=timeYMSArr[0]+'m'+timeYMSArr[1]+'s'; - } - var timeStr= ""; - var pageStrMatch = window.location.href.match(/(p=[1-9]{1,})/g); - var pageStr = ""; - if(typeof pageStrMatch === "object" && pageStrMatch?.length > 0){ - pageStr = '&' + pageStrMatch[0]; - }else if(typeof pageStrMatch === "string") { - pageStr = '&' + pageStrMatch; - } - timeStr = window.location.href.split('?')[0]+'?t=' + joinTimeStr + pageStr; - `,!0).then(v=>{Un.clipboard.writeText("["+v.split("?t=")[1].replace(/&p=[1-9]{1,}/g,"")+"]("+v+")")})}catch{}}})),setTimeout(()=>{this.menu.popup(i),this.node.url!==s.pageURL&&!s.selectionText&&this.menu.popup(i)},0)},!1)}),this.webviewEl.addEventListener("will-navigate",o=>{this.currentUrl=o.url}),this.webviewEl.addEventListener("did-navigate-in-page",o=>{this.currentUrl=o.url,this.webviewEl.setAttribute("src",o.url)}),this.webviewEl.addEventListener("destroyed",()=>{t!==this.contentEl.doc&&(this.webviewEl.detach(),this.appendWebView())}),t.contains(this.contentEl)?this.contentEl.appendChild(this.webviewEl):this.contentEl.onNodeInserted(()=>{this.contentEl.doc===t?this.contentEl.appendChild(this.webviewEl):this.appendWebView()})}}const zN=d.createContext({dragDropManager:void 0});function Lo(e){return"Minified Redux error #"+e+"; visit https://redux.js.org/Errors?code="+e+" for the full message or use the non-minified dev 
environment for full errors. "}var RO=function(){return typeof Symbol=="function"&&Symbol.observable||"@@observable"}(),o0=function(){return Math.random().toString(36).substring(7).split("").join(".")},DO={INIT:"@@redux/INIT"+o0(),REPLACE:"@@redux/REPLACE"+o0(),PROBE_UNKNOWN_ACTION:function(){return"@@redux/PROBE_UNKNOWN_ACTION"+o0()}};function GZ(e){if(typeof e!="object"||e===null)return!1;for(var t=e;Object.getPrototypeOf(t)!==null;)t=Object.getPrototypeOf(t);return Object.getPrototypeOf(e)===t}function HN(e,t,n){var r;if(typeof t=="function"&&typeof n=="function"||typeof n=="function"&&typeof arguments[3]=="function")throw new Error(Lo(0));if(typeof t=="function"&&typeof n>"u"&&(n=t,t=void 0),typeof n<"u"){if(typeof n!="function")throw new Error(Lo(1));return n(HN)(e,t)}if(typeof e!="function")throw new Error(Lo(2));var o=e,i=t,a=[],s=a,c=!1;function u(){s===a&&(s=a.slice())}function p(){if(c)throw new Error(Lo(3));return i}function v(y){if(typeof y!="function")throw new Error(Lo(4));if(c)throw new Error(Lo(5));var w=!0;return u(),s.push(y),function(){if(w){if(c)throw new Error(Lo(6));w=!1,u();var S=s.indexOf(y);s.splice(S,1),a=null}}}function h(y){if(!GZ(y))throw new Error(Lo(7));if(typeof y.type>"u")throw new Error(Lo(8));if(c)throw new Error(Lo(9));try{c=!0,i=o(i,y)}finally{c=!1}for(var w=a=s,C=0;Cr&&r[o]?r[o]:n||null,e)}function ZZ(e,t){return e.filter(n=>n!==t)}function FN(e){return typeof e=="object"}function JZ(e,t){const n=new Map,r=i=>{n.set(i,n.has(i)?n.get(i)+1:1)};e.forEach(r),t.forEach(r);const o=[];return n.forEach((i,a)=>{i===1&&o.push(a)}),o}function eJ(e,t){return e.filter(n=>t.indexOf(n)>-1)}const l1="dnd-core/INIT_COORDS",ch="dnd-core/BEGIN_DRAG",c1="dnd-core/PUBLISH_DRAG_SOURCE",uh="dnd-core/HOVER",dh="dnd-core/DROP",fh="dnd-core/END_DRAG";function jO(e,t){return{type:l1,payload:{sourceClientOffset:t||null,clientOffset:e||null}}}const tJ={type:l1,payload:{clientOffset:null,sourceClientOffset:null}};function nJ(e){return 
function(n=[],r={publishSource:!0}){const{publishSource:o=!0,clientOffset:i,getSourceClientOffset:a}=r,s=e.getMonitor(),c=e.getRegistry();e.dispatch(jO(i)),rJ(n,s,c);const u=aJ(n,s);if(u==null){e.dispatch(tJ);return}let p=null;if(i){if(!a)throw new Error("getSourceClientOffset must be defined");oJ(a),p=a(u)}e.dispatch(jO(i,p));const h=c.getSource(u).beginDrag(s,u);if(h==null)return;iJ(h),c.pinSource(u);const m=c.getSourceType(u);return{type:ch,payload:{itemType:m,item:h,sourceId:u,clientOffset:i||null,sourceClientOffset:p||null,isSourcePublic:!!o}}}}function rJ(e,t,n){nn(!t.isDragging(),"Cannot call beginDrag while dragging."),e.forEach(function(r){nn(n.getSource(r),"Expected sourceIds to be registered.")})}function oJ(e){nn(typeof e=="function","When clientOffset is provided, getSourceClientOffset must be a function.")}function iJ(e){nn(FN(e),"Item must be an object.")}function aJ(e,t){let n=null;for(let r=e.length-1;r>=0;r--)if(t.canDragSource(e[r])){n=e[r];break}return n}function sJ(e,t,n){return t in e?Object.defineProperty(e,t,{value:n,enumerable:!0,configurable:!0,writable:!0}):e[t]=n,e}function lJ(e){for(var t=1;t{const c=dJ(a,s,o,r),u={type:dh,payload:{dropResult:lJ({},n,c)}};e.dispatch(u)})}}function uJ(e){nn(e.isDragging(),"Cannot call drop while not dragging."),nn(!e.didDrop(),"Cannot call drop twice during one drag operation.")}function dJ(e,t,n,r){const o=n.getTarget(e);let i=o?o.drop(r,e):void 0;return fJ(i),typeof i>"u"&&(i=t===0?{}:r.getDropResult()),i}function fJ(e){nn(typeof e>"u"||FN(e),"Drop result must either be an object or undefined.")}function pJ(e){const t=e.getTargetIds().filter(e.canDropOnTarget,e);return t.reverse(),t}function vJ(e){return function(){const n=e.getMonitor(),r=e.getRegistry();hJ(n);const o=n.getSourceId();return o!=null&&(r.getSource(o,!0).endDrag(n,o),r.unpinSource()),{type:fh}}}function hJ(e){nn(e.isDragging(),"Cannot call endDrag while not dragging.")}function Zb(e,t){return 
t===null?e===null:Array.isArray(e)?e.some(n=>n===t):e===t}function gJ(e){return function(n,{clientOffset:r}={}){mJ(n);const o=n.slice(0),i=e.getMonitor(),a=e.getRegistry(),s=i.getItemType();return yJ(o,a,s),bJ(o,i,a),wJ(o,i,a),{type:uh,payload:{targetIds:o,clientOffset:r||null}}}}function mJ(e){nn(Array.isArray(e),"Expected targetIds to be an array.")}function bJ(e,t,n){nn(t.isDragging(),"Cannot call hover while not dragging."),nn(!t.didDrop(),"Cannot call hover after drop.");for(let r=0;r=0;r--){const o=e[r],i=t.getTargetType(o);Zb(i,n)||e.splice(r,1)}}function wJ(e,t,n){e.forEach(function(r){n.getTarget(r).hover(t,r)})}function xJ(e){return function(){if(e.getMonitor().isDragging())return{type:c1}}}function SJ(e){return{beginDrag:nJ(e),publishDragSource:xJ(e),hover:gJ(e),drop:cJ(e),endDrag:vJ(e)}}class CJ{receiveBackend(t){this.backend=t}getMonitor(){return this.monitor}getBackend(){return this.backend}getRegistry(){return this.monitor.registry}getActions(){const t=this,{dispatch:n}=this.store;function r(i){return(...a)=>{const s=i.apply(t,a);typeof s<"u"&&n(s)}}const o=SJ(this);return Object.keys(o).reduce((i,a)=>{const s=o[a];return i[a]=r(s),i},{})}dispatch(t){this.store.dispatch(t)}constructor(t,n){this.isSetUp=!1,this.handleRefCountChange=()=>{const r=this.store.getState().refCount>0;this.backend&&(r&&!this.isSetUp?(this.backend.setup(),this.isSetUp=!0):!r&&this.isSetUp&&(this.backend.teardown(),this.isSetUp=!1))},this.store=t,this.monitor=n,t.subscribe(this.handleRefCountChange)}}function EJ(e,t){return{x:e.x+t.x,y:e.y+t.y}}function _N(e,t){return{x:e.x-t.x,y:e.y-t.y}}function kJ(e){const{clientOffset:t,initialClientOffset:n,initialSourceClientOffset:r}=e;return!t||!n||!r?null:_N(EJ(t,r),n)}function OJ(e){const{clientOffset:t,initialClientOffset:n}=e;return!t||!n?null:_N(t,n)}const Iu=[],u1=[];Iu.__IS_NONE__=!0;u1.__IS_ALL__=!0;function $J(e,t){return e===Iu?!1:e===u1||typeof t>"u"?!0:eJ(t,e).length>0}class 
IJ{subscribeToStateChange(t,n={}){const{handlerIds:r}=n;nn(typeof t=="function","listener must be a function."),nn(typeof r>"u"||Array.isArray(r),"handlerIds, when specified, must be an array of strings.");let o=this.store.getState().stateId;const i=()=>{const a=this.store.getState(),s=a.stateId;try{s===o||s===o+1&&!$J(a.dirtyHandlerIds,r)||t()}finally{o=s}};return this.store.subscribe(i)}subscribeToOffsetChange(t){nn(typeof t=="function","listener must be a function.");let n=this.store.getState().dragOffset;const r=()=>{const o=this.store.getState().dragOffset;o!==n&&(n=o,t())};return this.store.subscribe(r)}canDragSource(t){if(!t)return!1;const n=this.registry.getSource(t);return nn(n,`Expected to find a valid source. sourceId=${t}`),this.isDragging()?!1:n.canDrag(this,t)}canDropOnTarget(t){if(!t)return!1;const n=this.registry.getTarget(t);if(nn(n,`Expected to find a valid target. targetId=${t}`),!this.isDragging()||this.didDrop())return!1;const r=this.registry.getTargetType(t),o=this.getItemType();return Zb(r,o)&&n.canDrop(this,t)}isDragging(){return!!this.getItemType()}isDraggingSource(t){if(!t)return!1;const n=this.registry.getSource(t,!0);if(nn(n,`Expected to find a valid source. 
sourceId=${t}`),!this.isDragging()||!this.isSourcePublic())return!1;const r=this.registry.getSourceType(t),o=this.getItemType();return r!==o?!1:n.isDragging(this,t)}isOverTarget(t,n={shallow:!1}){if(!t)return!1;const{shallow:r}=n;if(!this.isDragging())return!1;const o=this.registry.getTargetType(t),i=this.getItemType();if(i&&!Zb(o,i))return!1;const a=this.getTargetIds();if(!a.length)return!1;const s=a.indexOf(t);return r?s===a.length-1:s>-1}getItemType(){return this.store.getState().dragOperation.itemType}getItem(){return this.store.getState().dragOperation.item}getSourceId(){return this.store.getState().dragOperation.sourceId}getTargetIds(){return this.store.getState().dragOperation.targetIds}getDropResult(){return this.store.getState().dragOperation.dropResult}didDrop(){return this.store.getState().dragOperation.didDrop}isSourcePublic(){return!!this.store.getState().dragOperation.isSourcePublic}getInitialClientOffset(){return this.store.getState().dragOffset.initialClientOffset}getInitialSourceClientOffset(){return this.store.getState().dragOffset.initialSourceClientOffset}getClientOffset(){return this.store.getState().dragOffset.clientOffset}getSourceClientOffset(){return kJ(this.store.getState().dragOffset)}getDifferenceFromInitialOffset(){return OJ(this.store.getState().dragOffset)}constructor(t,n){this.store=t,this.registry=n}}const LO=typeof global<"u"?global:self,VN=LO.MutationObserver||LO.WebKitMutationObserver;function WN(e){return function(){const n=setTimeout(o,0),r=setInterval(o,50);function o(){clearTimeout(n),clearInterval(r),e()}}}function TJ(e){let t=1;const n=new VN(e),r=document.createTextNode("");return n.observe(r,{characterData:!0}),function(){t=-t,r.data=t}}const PJ=typeof VN=="function"?TJ:WN;class 
MJ{enqueueTask(t){const{queue:n,requestFlush:r}=this;n.length||(r(),this.flushing=!0),n[n.length]=t}constructor(){this.queue=[],this.pendingErrors=[],this.flushing=!1,this.index=0,this.capacity=1024,this.flush=()=>{const{queue:t}=this;for(;this.indexthis.capacity){for(let r=0,o=t.length-this.index;r{this.pendingErrors.push(t),this.requestErrorThrow()},this.requestFlush=PJ(this.flush),this.requestErrorThrow=WN(()=>{if(this.pendingErrors.length)throw this.pendingErrors.shift()})}}class NJ{call(){try{this.task&&this.task()}catch(t){this.onError(t)}finally{this.task=null,this.release(this)}}constructor(t,n){this.onError=t,this.release=n,this.task=null}}class RJ{create(t){const n=this.freeTasks,r=n.length?n.pop():new NJ(this.onError,o=>n[n.length]=o);return r.task=t,r}constructor(t){this.onError=t,this.freeTasks=[]}}const UN=new MJ,DJ=new RJ(UN.registerPendingError);function jJ(e){UN.enqueueTask(DJ.create(e))}const d1="dnd-core/ADD_SOURCE",f1="dnd-core/ADD_TARGET",p1="dnd-core/REMOVE_SOURCE",ph="dnd-core/REMOVE_TARGET";function LJ(e){return{type:d1,payload:{sourceId:e}}}function BJ(e){return{type:f1,payload:{targetId:e}}}function AJ(e){return{type:p1,payload:{sourceId:e}}}function zJ(e){return{type:ph,payload:{targetId:e}}}function HJ(e){nn(typeof e.canDrag=="function","Expected canDrag to be a function."),nn(typeof e.beginDrag=="function","Expected beginDrag to be a function."),nn(typeof e.endDrag=="function","Expected endDrag to be a function.")}function FJ(e){nn(typeof e.canDrop=="function","Expected canDrop to be a function."),nn(typeof e.hover=="function","Expected hover to be a function."),nn(typeof e.drop=="function","Expected beginDrag to be a function.")}function Jb(e,t){if(t&&Array.isArray(e)){e.forEach(n=>Jb(n,!1));return}nn(typeof e=="string"||typeof e=="symbol",t?"Type can only be a string, a symbol, or an array of either.":"Type can only be a string or a symbol.")}var Ho;(function(e){e.SOURCE="SOURCE",e.TARGET="TARGET"})(Ho||(Ho={}));let _J=0;function 
VJ(){return _J++}function WJ(e){const t=VJ().toString();switch(e){case Ho.SOURCE:return`S${t}`;case Ho.TARGET:return`T${t}`;default:throw new Error(`Unknown Handler Role: ${e}`)}}function BO(e){switch(e[0]){case"S":return Ho.SOURCE;case"T":return Ho.TARGET;default:throw new Error(`Cannot parse handler ID: ${e}`)}}function AO(e,t){const n=e.entries();let r=!1;do{const{done:o,value:[,i]}=n.next();if(i===t)return!0;r=!!o}while(!r);return!1}class UJ{addSource(t,n){Jb(t),HJ(n);const r=this.addHandler(Ho.SOURCE,t,n);return this.store.dispatch(LJ(r)),r}addTarget(t,n){Jb(t,!0),FJ(n);const r=this.addHandler(Ho.TARGET,t,n);return this.store.dispatch(BJ(r)),r}containsHandler(t){return AO(this.dragSources,t)||AO(this.dropTargets,t)}getSource(t,n=!1){return nn(this.isSourceId(t),"Expected a valid source ID."),n&&t===this.pinnedSourceId?this.pinnedSource:this.dragSources.get(t)}getTarget(t){return nn(this.isTargetId(t),"Expected a valid target ID."),this.dropTargets.get(t)}getSourceType(t){return nn(this.isSourceId(t),"Expected a valid source ID."),this.types.get(t)}getTargetType(t){return nn(this.isTargetId(t),"Expected a valid target ID."),this.types.get(t)}isSourceId(t){return BO(t)===Ho.SOURCE}isTargetId(t){return BO(t)===Ho.TARGET}removeSource(t){nn(this.getSource(t),"Expected an existing source."),this.store.dispatch(AJ(t)),jJ(()=>{this.dragSources.delete(t),this.types.delete(t)})}removeTarget(t){nn(this.getTarget(t),"Expected an existing target."),this.store.dispatch(zJ(t)),this.dropTargets.delete(t),this.types.delete(t)}pinSource(t){const n=this.getSource(t);nn(n,"Expected an existing source."),this.pinnedSourceId=t,this.pinnedSource=n}unpinSource(){nn(this.pinnedSource,"No source is pinned at the time."),this.pinnedSourceId=null,this.pinnedSource=null}addHandler(t,n,r){const o=WJ(t);return this.types.set(o,n),t===Ho.SOURCE?this.dragSources.set(o,r):t===Ho.TARGET&&this.dropTargets.set(o,r),o}constructor(t){this.types=new Map,this.dragSources=new Map,this.dropTargets=new 
Map,this.pinnedSourceId=null,this.pinnedSource=null,this.store=t}}const KJ=(e,t)=>e===t;function qJ(e,t){return!e&&!t?!0:!e||!t?!1:e.x===t.x&&e.y===t.y}function XJ(e,t,n=KJ){if(e.length!==t.length)return!1;for(let r=0;r0||!XJ(n,r)))return Iu;const a=r[r.length-1],s=n[n.length-1];return a!==s&&(a&&o.push(a),s&&o.push(s)),o}function YJ(e,t,n){return t in e?Object.defineProperty(e,t,{value:n,enumerable:!0,configurable:!0,writable:!0}):e[t]=n,e}function QJ(e){for(var t=1;t=0)&&Object.prototype.propertyIsEnumerable.call(e,r)&&(n[r]=e[r])}return n}function uee(e,t){if(e==null)return{};var n={},r=Object.keys(e),o,i;for(i=0;i=0)&&(n[o]=e[o]);return n}let HO=0;const Lp=Symbol.for("__REACT_DND_CONTEXT_INSTANCE__");var dee=d.memo(function(t){var{children:n}=t,r=cee(t,["children"]);const[o,i]=fee(r);return d.useEffect(()=>{if(i){const a=KN();return++HO,()=>{--HO===0&&(a[Lp]=null)}}},[]),it.jsx(zN.Provider,{value:o,children:n})});function fee(e){if("manager"in e)return[{dragDropManager:e.manager},!1];const t=pee(e.backend,e.context,e.options,e.debugMode),n=!e.context;return[t,n]}function pee(e,t=KN(),n,r){const o=t;return o[Lp]||(o[Lp]={dragDropManager:see(e,t,n,r)}),o[Lp]}function KN(){return typeof global<"u"?global:window}var vee=function e(t,n){if(t===n)return!0;if(t&&n&&typeof t=="object"&&typeof n=="object"){if(t.constructor!==n.constructor)return!1;var r,o,i;if(Array.isArray(t)){if(r=t.length,r!=n.length)return!1;for(o=r;o--!==0;)if(!e(t[o],n[o]))return!1;return!0}if(t.constructor===RegExp)return t.source===n.source&&t.flags===n.flags;if(t.valueOf!==Object.prototype.valueOf)return t.valueOf()===n.valueOf();if(t.toString!==Object.prototype.toString)return t.toString()===n.toString();if(i=Object.keys(t),r=i.length,r!==Object.keys(n).length)return!1;for(o=r;o--!==0;)if(!Object.prototype.hasOwnProperty.call(n,i[o]))return!1;for(o=r;o--!==0;){var a=i[o];if(!e(t[a],n[a]))return!1}return!0}return t!==t&&n!==n};const hee=js(vee),Ds=typeof 
window<"u"?d.useLayoutEffect:d.useEffect;function qN(e,t,n){const[r,o]=d.useState(()=>t(e)),i=d.useCallback(()=>{const a=t(e);hee(r,a)||(o(a),n&&n())},[r,e,n]);return Ds(i),[r,i]}function gee(e,t,n){const[r,o]=qN(e,t,n);return Ds(function(){const a=e.getHandlerId();if(a!=null)return e.subscribeToStateChange(o,{handlerIds:[a]})},[e,o]),r}function XN(e,t,n){return gee(t,e||(()=>({})),()=>n.reconnect())}function GN(e,t){const n=[];return typeof e!="function"&&n.push(e),d.useMemo(()=>typeof e=="function"?e():e,n)}function mee(e){return d.useMemo(()=>e.hooks.dragSource(),[e])}function bee(e){return d.useMemo(()=>e.hooks.dragPreview(),[e])}let i0=!1,a0=!1;class yee{receiveHandlerId(t){this.sourceId=t}getHandlerId(){return this.sourceId}canDrag(){nn(!i0,"You may not call monitor.canDrag() inside your canDrag() implementation. Read more: http://react-dnd.github.io/react-dnd/docs/api/drag-source-monitor");try{return i0=!0,this.internalMonitor.canDragSource(this.sourceId)}finally{i0=!1}}isDragging(){if(!this.sourceId)return!1;nn(!a0,"You may not call monitor.isDragging() inside your isDragging() implementation. 
Read more: http://react-dnd.github.io/react-dnd/docs/api/drag-source-monitor");try{return a0=!0,this.internalMonitor.isDraggingSource(this.sourceId)}finally{a0=!1}}subscribeToStateChange(t,n){return this.internalMonitor.subscribeToStateChange(t,n)}isDraggingSource(t){return this.internalMonitor.isDraggingSource(t)}isOverTarget(t,n){return this.internalMonitor.isOverTarget(t,n)}getTargetIds(){return this.internalMonitor.getTargetIds()}isSourcePublic(){return this.internalMonitor.isSourcePublic()}getSourceId(){return this.internalMonitor.getSourceId()}subscribeToOffsetChange(t){return this.internalMonitor.subscribeToOffsetChange(t)}canDragSource(t){return this.internalMonitor.canDragSource(t)}canDropOnTarget(t){return this.internalMonitor.canDropOnTarget(t)}getItemType(){return this.internalMonitor.getItemType()}getItem(){return this.internalMonitor.getItem()}getDropResult(){return this.internalMonitor.getDropResult()}didDrop(){return this.internalMonitor.didDrop()}getInitialClientOffset(){return this.internalMonitor.getInitialClientOffset()}getInitialSourceClientOffset(){return this.internalMonitor.getInitialSourceClientOffset()}getSourceClientOffset(){return this.internalMonitor.getSourceClientOffset()}getClientOffset(){return this.internalMonitor.getClientOffset()}getDifferenceFromInitialOffset(){return this.internalMonitor.getDifferenceFromInitialOffset()}constructor(t){this.sourceId=null,this.internalMonitor=t.getMonitor()}}let s0=!1;class wee{receiveHandlerId(t){this.targetId=t}getHandlerId(){return this.targetId}subscribeToStateChange(t,n){return this.internalMonitor.subscribeToStateChange(t,n)}canDrop(){if(!this.targetId)return!1;nn(!s0,"You may not call monitor.canDrop() inside your canDrop() implementation. 
Read more: http://react-dnd.github.io/react-dnd/docs/api/drop-target-monitor");try{return s0=!0,this.internalMonitor.canDropOnTarget(this.targetId)}finally{s0=!1}}isOver(t){return this.targetId?this.internalMonitor.isOverTarget(this.targetId,t):!1}getItemType(){return this.internalMonitor.getItemType()}getItem(){return this.internalMonitor.getItem()}getDropResult(){return this.internalMonitor.getDropResult()}didDrop(){return this.internalMonitor.didDrop()}getInitialClientOffset(){return this.internalMonitor.getInitialClientOffset()}getInitialSourceClientOffset(){return this.internalMonitor.getInitialSourceClientOffset()}getSourceClientOffset(){return this.internalMonitor.getSourceClientOffset()}getClientOffset(){return this.internalMonitor.getClientOffset()}getDifferenceFromInitialOffset(){return this.internalMonitor.getDifferenceFromInitialOffset()}constructor(t){this.targetId=null,this.internalMonitor=t.getMonitor()}}function xee(e,t,n){const r=n.getRegistry(),o=r.addTarget(e,t);return[o,()=>r.removeTarget(o)]}function See(e,t,n){const r=n.getRegistry(),o=r.addSource(e,t);return[o,()=>r.removeSource(o)]}function ey(e,t,n,r){let o;if(o!==void 0)return!!o;if(e===t)return!0;if(typeof e!="object"||!e||typeof t!="object"||!t)return!1;const i=Object.keys(e),a=Object.keys(t);if(i.length!==a.length)return!1;const s=Object.prototype.hasOwnProperty.bind(t);for(let c=0;c, or turn it into a drag source or a drop target itself.`)}function Eee(e){return(t=null,n=null)=>{if(!d.isValidElement(t)){const i=t;return e(i,n),i}const r=t;return Cee(r),kee(r,n?i=>e(i,n):e)}}function YN(e){const t={};return Object.keys(e).forEach(n=>{const r=e[n];if(n.endsWith("Ref"))t[n]=e[n];else{const o=Eee(r);t[n]=()=>o}}),t}function FO(e,t){typeof e=="function"?e(t):e.current=t}function kee(e,t){const n=e.ref;return nn(typeof n!="string","Cannot connect React DnD to an element with an existing string ref. Please convert it to use a callback ref instead, or wrap it into a or
. Read more: https://reactjs.org/docs/refs-and-the-dom.html#callback-refs"),n?d.cloneElement(e,{ref:r=>{FO(n,r),FO(t,r)}}):d.cloneElement(e,{ref:t})}class Oee{receiveHandlerId(t){this.handlerId!==t&&(this.handlerId=t,this.reconnect())}get connectTarget(){return this.dragSource}get dragSourceOptions(){return this.dragSourceOptionsInternal}set dragSourceOptions(t){this.dragSourceOptionsInternal=t}get dragPreviewOptions(){return this.dragPreviewOptionsInternal}set dragPreviewOptions(t){this.dragPreviewOptionsInternal=t}reconnect(){const t=this.reconnectDragSource();this.reconnectDragPreview(t)}reconnectDragSource(){const t=this.dragSource,n=this.didHandlerIdChange()||this.didConnectedDragSourceChange()||this.didDragSourceOptionsChange();return n&&this.disconnectDragSource(),this.handlerId?t?(n&&(this.lastConnectedHandlerId=this.handlerId,this.lastConnectedDragSource=t,this.lastConnectedDragSourceOptions=this.dragSourceOptions,this.dragSourceUnsubscribe=this.backend.connectDragSource(this.handlerId,t,this.dragSourceOptions)),n):(this.lastConnectedDragSource=t,n):n}reconnectDragPreview(t=!1){const n=this.dragPreview,r=t||this.didHandlerIdChange()||this.didConnectedDragPreviewChange()||this.didDragPreviewOptionsChange();if(r&&this.disconnectDragPreview(),!!this.handlerId){if(!n){this.lastConnectedDragPreview=n;return}r&&(this.lastConnectedHandlerId=this.handlerId,this.lastConnectedDragPreview=n,this.lastConnectedDragPreviewOptions=this.dragPreviewOptions,this.dragPreviewUnsubscribe=this.backend.connectDragPreview(this.handlerId,n,this.dragPreviewOptions))}}didHandlerIdChange(){return this.lastConnectedHandlerId!==this.handlerId}didConnectedDragSourceChange(){return this.lastConnectedDragSource!==this.dragSource}didConnectedDragPreviewChange(){return 
this.lastConnectedDragPreview!==this.dragPreview}didDragSourceOptionsChange(){return!ey(this.lastConnectedDragSourceOptions,this.dragSourceOptions)}didDragPreviewOptionsChange(){return!ey(this.lastConnectedDragPreviewOptions,this.dragPreviewOptions)}disconnectDragSource(){this.dragSourceUnsubscribe&&(this.dragSourceUnsubscribe(),this.dragSourceUnsubscribe=void 0)}disconnectDragPreview(){this.dragPreviewUnsubscribe&&(this.dragPreviewUnsubscribe(),this.dragPreviewUnsubscribe=void 0,this.dragPreviewNode=null,this.dragPreviewRef=null)}get dragSource(){return this.dragSourceNode||this.dragSourceRef&&this.dragSourceRef.current}get dragPreview(){return this.dragPreviewNode||this.dragPreviewRef&&this.dragPreviewRef.current}clearDragSource(){this.dragSourceNode=null,this.dragSourceRef=null}clearDragPreview(){this.dragPreviewNode=null,this.dragPreviewRef=null}constructor(t){this.hooks=YN({dragSource:(n,r)=>{this.clearDragSource(),this.dragSourceOptions=r||null,ty(n)?this.dragSourceRef=n:this.dragSourceNode=n,this.reconnectDragSource()},dragPreview:(n,r)=>{this.clearDragPreview(),this.dragPreviewOptions=r||null,ty(n)?this.dragPreviewRef=n:this.dragPreviewNode=n,this.reconnectDragPreview()}}),this.handlerId=null,this.dragSourceRef=null,this.dragSourceOptionsInternal=null,this.dragPreviewRef=null,this.dragPreviewOptionsInternal=null,this.lastConnectedHandlerId=null,this.lastConnectedDragSource=null,this.lastConnectedDragSourceOptions=null,this.lastConnectedDragPreview=null,this.lastConnectedDragPreviewOptions=null,this.backend=t}}class $ee{get connectTarget(){return this.dropTarget}reconnect(){const t=this.didHandlerIdChange()||this.didDropTargetChange()||this.didOptionsChange();t&&this.disconnectDropTarget();const 
n=this.dropTarget;if(this.handlerId){if(!n){this.lastConnectedDropTarget=n;return}t&&(this.lastConnectedHandlerId=this.handlerId,this.lastConnectedDropTarget=n,this.lastConnectedDropTargetOptions=this.dropTargetOptions,this.unsubscribeDropTarget=this.backend.connectDropTarget(this.handlerId,n,this.dropTargetOptions))}}receiveHandlerId(t){t!==this.handlerId&&(this.handlerId=t,this.reconnect())}get dropTargetOptions(){return this.dropTargetOptionsInternal}set dropTargetOptions(t){this.dropTargetOptionsInternal=t}didHandlerIdChange(){return this.lastConnectedHandlerId!==this.handlerId}didDropTargetChange(){return this.lastConnectedDropTarget!==this.dropTarget}didOptionsChange(){return!ey(this.lastConnectedDropTargetOptions,this.dropTargetOptions)}disconnectDropTarget(){this.unsubscribeDropTarget&&(this.unsubscribeDropTarget(),this.unsubscribeDropTarget=void 0)}get dropTarget(){return this.dropTargetNode||this.dropTargetRef&&this.dropTargetRef.current}clearDropTarget(){this.dropTargetRef=null,this.dropTargetNode=null}constructor(t){this.hooks=YN({dropTarget:(n,r)=>{this.clearDropTarget(),this.dropTargetOptions=r,ty(n)?this.dropTargetRef=n:this.dropTargetNode=n,this.reconnect()}}),this.handlerId=null,this.dropTargetRef=null,this.dropTargetOptionsInternal=null,this.lastConnectedHandlerId=null,this.lastConnectedDropTarget=null,this.lastConnectedDropTargetOptions=null,this.backend=t}}function ua(){const{dragDropManager:e}=d.useContext(zN);return nn(e!=null,"Expected drag drop context"),e}function Iee(e,t){const n=ua(),r=d.useMemo(()=>new Oee(n.getBackend()),[n]);return Ds(()=>(r.dragSourceOptions=e||null,r.reconnect(),()=>r.disconnectDragSource()),[r,e]),Ds(()=>(r.dragPreviewOptions=t||null,r.reconnect(),()=>r.disconnectDragPreview()),[r,t]),r}function Tee(){const e=ua();return d.useMemo(()=>new yee(e),[e])}class Pee{beginDrag(){const t=this.spec,n=this.monitor;let r=null;return typeof t.item=="object"?r=t.item:typeof 
t.item=="function"?r=t.item(n):r={},r??null}canDrag(){const t=this.spec,n=this.monitor;return typeof t.canDrag=="boolean"?t.canDrag:typeof t.canDrag=="function"?t.canDrag(n):!0}isDragging(t,n){const r=this.spec,o=this.monitor,{isDragging:i}=r;return i?i(o):n===t.getSourceId()}endDrag(){const t=this.spec,n=this.monitor,r=this.connector,{end:o}=t;o&&o(n.getItem(),n),r.reconnect()}constructor(t,n,r){this.spec=t,this.monitor=n,this.connector=r}}function Mee(e,t,n){const r=d.useMemo(()=>new Pee(e,t,n),[t,n]);return d.useEffect(()=>{r.spec=e},[e]),r}function Nee(e){return d.useMemo(()=>{const t=e.type;return nn(t!=null,"spec.type must be defined"),t},[e])}function Ree(e,t,n){const r=ua(),o=Mee(e,t,n),i=Nee(e);Ds(function(){if(i!=null){const[s,c]=See(i,o,r);return t.receiveHandlerId(s),n.receiveHandlerId(s),c}},[r,t,n,o,i])}function Dee(e,t){const n=GN(e);nn(!n.begin,"useDrag::spec.begin was deprecated in v14. Replace spec.begin() with spec.item(). (see more here - https://react-dnd.github.io/react-dnd/docs/api/use-drag)");const r=Tee(),o=Iee(n.options,n.previewOptions);return Ree(n,r,o),[XN(n.collect,r,o),mee(o),bee(o)]}function jee(e){const n=ua().getMonitor(),[r,o]=qN(n,e);return d.useEffect(()=>n.subscribeToOffsetChange(o)),d.useEffect(()=>n.subscribeToStateChange(o)),r}function Lee(e){return d.useMemo(()=>e.hooks.dropTarget(),[e])}function Bee(e){const t=ua(),n=d.useMemo(()=>new $ee(t.getBackend()),[t]);return Ds(()=>(n.dropTargetOptions=e||null,n.reconnect(),()=>n.disconnectDropTarget()),[e]),n}function Aee(){const e=ua();return d.useMemo(()=>new wee(e),[e])}function zee(e){const{accept:t}=e;return d.useMemo(()=>(nn(e.accept!=null,"accept must be defined"),Array.isArray(t)?t:[t]),[t])}class Hee{canDrop(){const t=this.spec,n=this.monitor;return t.canDrop?t.canDrop(n.getItem(),n):!0}hover(){const t=this.spec,n=this.monitor;t.hover&&t.hover(n.getItem(),n)}drop(){const t=this.spec,n=this.monitor;if(t.drop)return 
t.drop(n.getItem(),n)}constructor(t,n){this.spec=t,this.monitor=n}}function Fee(e,t){const n=d.useMemo(()=>new Hee(e,t),[t]);return d.useEffect(()=>{n.spec=e},[e]),n}function _ee(e,t,n){const r=ua(),o=Fee(e,t),i=zee(e);Ds(function(){const[s,c]=xee(i,o,r);return t.receiveHandlerId(s),n.receiveHandlerId(s),c},[r,t,o,n,i.map(a=>a.toString()).join("|")])}function QN(e,t){const n=GN(e),r=Aee(),o=Bee(n.options);return _ee(n,r,o),[XN(n.collect,r,o),Lee(o)]}function ZN(e){let t=null;return()=>(t==null&&(t=e()),t)}function Vee(e,t){return e.filter(n=>n!==t)}function Wee(e,t){const n=new Set,r=i=>n.add(i);e.forEach(r),t.forEach(r);const o=[];return n.forEach(i=>o.push(i)),o}class Uee{enter(t){const n=this.entered.length,r=o=>this.isNodeInDocument(o)&&(!o.contains||o.contains(t));return this.entered=Wee(this.entered.filter(r),[t]),n===0&&this.entered.length>0}leave(t){const n=this.entered.length;return this.entered=Vee(this.entered.filter(this.isNodeInDocument),t),n>0&&this.entered.length===0}reset(){this.entered=[]}constructor(t){this.entered=[],this.isNodeInDocument=t}}class Kee{initializeExposedProperties(){Object.keys(this.config.exposeProperties).forEach(t=>{Object.defineProperty(this.item,t,{configurable:!0,enumerable:!0,get(){return null}})})}loadDataTransfer(t){if(t){const n={};Object.keys(this.config.exposeProperties).forEach(r=>{const o=this.config.exposeProperties[r];o!=null&&(n[r]={value:o(t,this.config.matchesTypes),configurable:!0,enumerable:!0})}),Object.defineProperties(this.item,n)}}canDrag(){return!0}beginDrag(){return this.item}isDragging(t,n){return n===t.getSourceId()}endDrag(){}constructor(t){this.config=t,this.item={},this.initializeExposedProperties()}}const JN="__NATIVE_FILE__",eR="__NATIVE_URL__",tR="__NATIVE_TEXT__",nR="__NATIVE_HTML__",_O=Object.freeze(Object.defineProperty({__proto__:null,FILE:JN,HTML:nR,TEXT:tR,URL:eR},Symbol.toStringTag,{value:"Module"}));function l0(e,t,n){const r=t.reduce((o,i)=>o||e.getData(i),"");return r??n}const 
ny={[JN]:{exposeProperties:{files:e=>Array.prototype.slice.call(e.files),items:e=>e.items,dataTransfer:e=>e},matchesTypes:["Files"]},[nR]:{exposeProperties:{html:(e,t)=>l0(e,t,""),dataTransfer:e=>e},matchesTypes:["Html","text/html"]},[eR]:{exposeProperties:{urls:(e,t)=>l0(e,t,"").split(` -`),dataTransfer:e=>e},matchesTypes:["Url","text/uri-list"]},[tR]:{exposeProperties:{text:(e,t)=>l0(e,t,""),dataTransfer:e=>e},matchesTypes:["Text","text/plain"]}};function qee(e,t){const n=ny[e];if(!n)throw new Error(`native type ${e} has no configuration`);const r=new Kee(n);return r.loadDataTransfer(t),r}function c0(e){if(!e)return null;const t=Array.prototype.slice.call(e.types||[]);return Object.keys(ny).filter(n=>{const r=ny[n];return r!=null&&r.matchesTypes?r.matchesTypes.some(o=>t.indexOf(o)>-1):!1})[0]||null}const Xee=ZN(()=>/firefox/i.test(navigator.userAgent)),rR=ZN(()=>!!window.safari);class VO{interpolate(t){const{xs:n,ys:r,c1s:o,c2s:i,c3s:a}=this;let s=n.length-1;if(t===n[s])return r[s];let c=0,u=a.length-1,p;for(;c<=u;){p=Math.floor(.5*(c+u));const m=n[p];if(mt)u=p-1;else return r[p]}s=Math.max(0,u);const v=t-n[s],h=v*v;return r[s]+o[s]*v+i[s]*h+a[s]*v*h}constructor(t,n){const{length:r}=t,o=[];for(let m=0;mt[m]{let $=new VO([0,.5,1],[c.y,c.y/p*b,c.y+b-p]).interpolate(h);return rR()&&i&&($+=(window.devicePixelRatio-1)*b),$},w=()=>new VO([0,.5,1],[c.x,c.x/u*m,c.x+m-u]).interpolate(v),{offsetX:C,offsetY:S}=o,E=C===0||C,k=S===0||S;return{x:E?C:w(),y:k?S:y()}}let Jee=class{get window(){if(this.globalContext)return this.globalContext;if(typeof window<"u")return window}get document(){var t;return!((t=this.globalContext)===null||t===void 0)&&t.document?this.globalContext.document:this.window?this.window.document:void 0}get rootElement(){var t;return((t=this.optionsArgs)===null||t===void 0?void 0:t.rootElement)||this.window}constructor(t,n){this.ownerDocument=null,this.globalContext=t,this.optionsArgs=n}};function ete(e,t,n){return t in 
e?Object.defineProperty(e,t,{value:n,enumerable:!0,configurable:!0,writable:!0}):e[t]=n,e}function WO(e){for(var t=1;t{this.sourcePreviewNodes.delete(t),this.sourcePreviewNodeOptions.delete(t)}}connectDragSource(t,n,r){this.sourceNodes.set(t,n),this.sourceNodeOptions.set(t,r);const o=a=>this.handleDragStart(a,t),i=a=>this.handleSelectStart(a);return n.setAttribute("draggable","true"),n.addEventListener("dragstart",o),n.addEventListener("selectstart",i),()=>{this.sourceNodes.delete(t),this.sourceNodeOptions.delete(t),n.removeEventListener("dragstart",o),n.removeEventListener("selectstart",i),n.setAttribute("draggable","false")}}connectDropTarget(t,n){const r=a=>this.handleDragEnter(a,t),o=a=>this.handleDragOver(a,t),i=a=>this.handleDrop(a,t);return n.addEventListener("dragenter",r),n.addEventListener("dragover",o),n.addEventListener("drop",i),()=>{n.removeEventListener("dragenter",r),n.removeEventListener("dragover",o),n.removeEventListener("drop",i)}}addEventListeners(t){t.addEventListener&&(t.addEventListener("dragstart",this.handleTopDragStart),t.addEventListener("dragstart",this.handleTopDragStartCapture,!0),t.addEventListener("dragend",this.handleTopDragEndCapture,!0),t.addEventListener("dragenter",this.handleTopDragEnter),t.addEventListener("dragenter",this.handleTopDragEnterCapture,!0),t.addEventListener("dragleave",this.handleTopDragLeaveCapture,!0),t.addEventListener("dragover",this.handleTopDragOver),t.addEventListener("dragover",this.handleTopDragOverCapture,!0),t.addEventListener("drop",this.handleTopDrop),t.addEventListener("drop",this.handleTopDropCapture,!0))}removeEventListeners(t){t.removeEventListener&&(t.removeEventListener("dragstart",this.handleTopDragStart),t.removeEventListener("dragstart",this.handleTopDragStartCapture,!0),t.removeEventListener("dragend",this.handleTopDragEndCapture,!0),t.removeEventListener("dragenter",this.handleTopDragEnter),t.removeEventListener("dragenter",this.handleTopDragEnterCapture,!0),t.removeEventListener("dragleav
e",this.handleTopDragLeaveCapture,!0),t.removeEventListener("dragover",this.handleTopDragOver),t.removeEventListener("dragover",this.handleTopDragOverCapture,!0),t.removeEventListener("drop",this.handleTopDrop),t.removeEventListener("drop",this.handleTopDropCapture,!0))}getCurrentSourceNodeOptions(){const t=this.monitor.getSourceId(),n=this.sourceNodeOptions.get(t);return WO({dropEffect:this.altKeyPressed?"copy":"move"},n||{})}getCurrentDropEffect(){return this.isDraggingNativeItem()?"copy":this.getCurrentSourceNodeOptions().dropEffect}getCurrentSourcePreviewNodeOptions(){const t=this.monitor.getSourceId(),n=this.sourcePreviewNodeOptions.get(t);return WO({anchorX:.5,anchorY:.5,captureDraggingState:!1},n||{})}isDraggingNativeItem(){const t=this.monitor.getItemType();return Object.keys(_O).some(n=>_O[n]===t)}beginDragNativeItem(t,n){this.clearCurrentDragSourceNode(),this.currentNativeSource=qee(t,n),this.currentNativeHandle=this.registry.addSource(t,this.currentNativeSource),this.actions.beginDrag([this.currentNativeHandle])}setCurrentDragSourceNode(t){this.clearCurrentDragSourceNode(),this.currentDragSourceNode=t;const n=1e3;this.mouseMoveTimeoutTimer=setTimeout(()=>{var r;return(r=this.rootElement)===null||r===void 0?void 0:r.addEventListener("mousemove",this.endDragIfSourceWasRemovedFromDOM,!0)},n)}clearCurrentDragSourceNode(){if(this.currentDragSourceNode){if(this.currentDragSourceNode=null,this.rootElement){var t;(t=this.window)===null||t===void 0||t.clearTimeout(this.mouseMoveTimeoutTimer||void 0),this.rootElement.removeEventListener("mousemove",this.endDragIfSourceWasRemovedFromDOM,!0)}return 
this.mouseMoveTimeoutTimer=null,!0}return!1}handleDragStart(t,n){t.defaultPrevented||(this.dragStartSourceIds||(this.dragStartSourceIds=[]),this.dragStartSourceIds.unshift(n))}handleDragEnter(t,n){this.dragEnterTargetIds.unshift(n)}handleDragOver(t,n){this.dragOverTargetIds===null&&(this.dragOverTargetIds=[]),this.dragOverTargetIds.unshift(n)}handleDrop(t,n){this.dropTargetIds.unshift(n)}constructor(t,n,r){this.sourcePreviewNodes=new Map,this.sourcePreviewNodeOptions=new Map,this.sourceNodes=new Map,this.sourceNodeOptions=new Map,this.dragStartSourceIds=null,this.dropTargetIds=[],this.dragEnterTargetIds=[],this.currentNativeSource=null,this.currentNativeHandle=null,this.currentDragSourceNode=null,this.altKeyPressed=!1,this.mouseMoveTimeoutTimer=null,this.asyncEndDragFrameId=null,this.dragOverTargetIds=null,this.lastClientOffset=null,this.hoverRafId=null,this.getSourceClientOffset=o=>{const i=this.sourceNodes.get(o);return i&&oR(i)||null},this.endDragNativeItem=()=>{this.isDraggingNativeItem()&&(this.actions.endDrag(),this.currentNativeHandle&&this.registry.removeSource(this.currentNativeHandle),this.currentNativeHandle=null,this.currentNativeSource=null)},this.isNodeInDocument=o=>!!(o&&this.document&&this.document.body&&this.document.body.contains(o)),this.endDragIfSourceWasRemovedFromDOM=()=>{const o=this.currentDragSourceNode;o==null||this.isNodeInDocument(o)||(this.clearCurrentDragSourceNode()&&this.monitor.isDragging()&&this.actions.endDrag(),this.cancelHover())},this.scheduleHover=o=>{this.hoverRafId===null&&typeof requestAnimationFrame<"u"&&(this.hoverRafId=requestAnimationFrame(()=>{this.monitor.isDragging()&&this.actions.hover(o||[],{clientOffset:this.lastClientOffset}),this.hoverRafId=null}))},this.cancelHover=()=>{this.hoverRafId!==null&&typeof 
cancelAnimationFrame<"u"&&(cancelAnimationFrame(this.hoverRafId),this.hoverRafId=null)},this.handleTopDragStartCapture=()=>{this.clearCurrentDragSourceNode(),this.dragStartSourceIds=[]},this.handleTopDragStart=o=>{if(o.defaultPrevented)return;const{dragStartSourceIds:i}=this;this.dragStartSourceIds=null;const a=lp(o);this.monitor.isDragging()&&(this.actions.endDrag(),this.cancelHover()),this.actions.beginDrag(i||[],{publishSource:!1,getSourceClientOffset:this.getSourceClientOffset,clientOffset:a});const{dataTransfer:s}=o,c=c0(s);if(this.monitor.isDragging()){if(s&&typeof s.setDragImage=="function"){const p=this.monitor.getSourceId(),v=this.sourceNodes.get(p),h=this.sourcePreviewNodes.get(p)||v;if(h){const{anchorX:m,anchorY:b,offsetX:y,offsetY:w}=this.getCurrentSourcePreviewNodeOptions(),E=Zee(v,h,a,{anchorX:m,anchorY:b},{offsetX:y,offsetY:w});s.setDragImage(h,E.x,E.y)}}try{s==null||s.setData("application/json",{})}catch{}this.setCurrentDragSourceNode(o.target);const{captureDraggingState:u}=this.getCurrentSourcePreviewNodeOptions();u?this.actions.publishDragSource():setTimeout(()=>this.actions.publishDragSource(),0)}else if(c)this.beginDragNativeItem(c);else{if(s&&!s.types&&(o.target&&!o.target.hasAttribute||!o.target.hasAttribute("draggable")))return;o.preventDefault()}},this.handleTopDragEndCapture=()=>{this.clearCurrentDragSourceNode()&&this.monitor.isDragging()&&this.actions.endDrag(),this.cancelHover()},this.handleTopDragEnterCapture=o=>{if(this.dragEnterTargetIds=[],this.isDraggingNativeItem()){var i;(i=this.currentNativeSource)===null||i===void 
0||i.loadDataTransfer(o.dataTransfer)}if(!this.enterLeaveCounter.enter(o.target)||this.monitor.isDragging())return;const{dataTransfer:s}=o,c=c0(s);c&&this.beginDragNativeItem(c,s)},this.handleTopDragEnter=o=>{const{dragEnterTargetIds:i}=this;if(this.dragEnterTargetIds=[],!this.monitor.isDragging())return;this.altKeyPressed=o.altKey,i.length>0&&this.actions.hover(i,{clientOffset:lp(o)}),i.some(s=>this.monitor.canDropOnTarget(s))&&(o.preventDefault(),o.dataTransfer&&(o.dataTransfer.dropEffect=this.getCurrentDropEffect()))},this.handleTopDragOverCapture=o=>{if(this.dragOverTargetIds=[],this.isDraggingNativeItem()){var i;(i=this.currentNativeSource)===null||i===void 0||i.loadDataTransfer(o.dataTransfer)}},this.handleTopDragOver=o=>{const{dragOverTargetIds:i}=this;if(this.dragOverTargetIds=[],!this.monitor.isDragging()){o.preventDefault(),o.dataTransfer&&(o.dataTransfer.dropEffect="none");return}this.altKeyPressed=o.altKey,this.lastClientOffset=lp(o),this.scheduleHover(i),(i||[]).some(s=>this.monitor.canDropOnTarget(s))?(o.preventDefault(),o.dataTransfer&&(o.dataTransfer.dropEffect=this.getCurrentDropEffect())):this.isDraggingNativeItem()?o.preventDefault():(o.preventDefault(),o.dataTransfer&&(o.dataTransfer.dropEffect="none"))},this.handleTopDragLeaveCapture=o=>{this.isDraggingNativeItem()&&o.preventDefault(),this.enterLeaveCounter.leave(o.target)&&(this.isDraggingNativeItem()&&setTimeout(()=>this.endDragNativeItem(),0),this.cancelHover())},this.handleTopDropCapture=o=>{if(this.dropTargetIds=[],this.isDraggingNativeItem()){var i;o.preventDefault(),(i=this.currentNativeSource)===null||i===void 0||i.loadDataTransfer(o.dataTransfer)}else 
c0(o.dataTransfer)&&o.preventDefault();this.enterLeaveCounter.reset()},this.handleTopDrop=o=>{const{dropTargetIds:i}=this;this.dropTargetIds=[],this.actions.hover(i,{clientOffset:lp(o)}),this.actions.drop({dropEffect:this.getCurrentDropEffect()}),this.isDraggingNativeItem()?this.endDragNativeItem():this.monitor.isDragging()&&this.actions.endDrag(),this.cancelHover()},this.handleSelectStart=o=>{const i=o.target;typeof i.dragDrop=="function"&&(i.tagName==="INPUT"||i.tagName==="SELECT"||i.tagName==="TEXTAREA"||i.isContentEditable||(o.preventDefault(),i.dragDrop()))},this.options=new Jee(n,r),this.actions=t.getActions(),this.monitor=t.getMonitor(),this.registry=t.getRegistry(),this.enterLeaveCounter=new Uee(this.isNodeInDocument)}}let cp;function nte(){return cp||(cp=new Image,cp.src="data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw=="),cp}const rte=function(t,n,r){return new tte(t,n,r)};var Es=[],ote=function(){return Es.some(function(e){return e.activeTargets.length>0})},ite=function(){return Es.some(function(e){return e.skippedTargets.length>0})},UO="ResizeObserver loop completed with undelivered notifications.",ate=function(){var e;typeof ErrorEvent=="function"?e=new ErrorEvent("error",{message:UO}):(e=document.createEvent("Event"),e.initEvent("error",!1,!1),e.message=UO),window.dispatchEvent(e)},sd;(function(e){e.BORDER_BOX="border-box",e.CONTENT_BOX="content-box",e.DEVICE_PIXEL_CONTENT_BOX="device-pixel-content-box"})(sd||(sd={}));var ks=function(e){return Object.freeze(e)},ste=function(){function e(t,n){this.inlineSize=t,this.blockSize=n,ks(this)}return e}(),iR=function(){function e(t,n,r,o){return this.x=t,this.y=n,this.width=r,this.height=o,this.top=this.y,this.left=this.x,this.bottom=this.top+this.height,this.right=this.left+this.width,ks(this)}return e.prototype.toJSON=function(){var 
t=this,n=t.x,r=t.y,o=t.top,i=t.right,a=t.bottom,s=t.left,c=t.width,u=t.height;return{x:n,y:r,top:o,right:i,bottom:a,left:s,width:c,height:u}},e.fromRect=function(t){return new e(t.x,t.y,t.width,t.height)},e}(),v1=function(e){return e instanceof SVGElement&&"getBBox"in e},aR=function(e){if(v1(e)){var t=e.getBBox(),n=t.width,r=t.height;return!n&&!r}var o=e,i=o.offsetWidth,a=o.offsetHeight;return!(i||a||e.getClientRects().length)},KO=function(e){var t;if(e instanceof Element)return!0;var n=(t=e==null?void 0:e.ownerDocument)===null||t===void 0?void 0:t.defaultView;return!!(n&&e instanceof n.Element)},lte=function(e){switch(e.tagName){case"INPUT":if(e.type!=="image")break;case"VIDEO":case"AUDIO":case"EMBED":case"OBJECT":case"CANVAS":case"IFRAME":case"IMG":return!0}return!1},Tu=typeof window<"u"?window:{},up=new WeakMap,qO=/auto|scroll/,cte=/^tb|vertical/,ute=/msie|trident/i.test(Tu.navigator&&Tu.navigator.userAgent),Oi=function(e){return parseFloat(e||"0")},Dl=function(e,t,n){return e===void 0&&(e=0),t===void 0&&(t=0),n===void 0&&(n=!1),new ste((n?t:e)||0,(n?e:t)||0)},XO=ks({devicePixelContentBoxSize:Dl(),borderBoxSize:Dl(),contentBoxSize:Dl(),contentRect:new iR(0,0,0,0)}),sR=function(e,t){if(t===void 0&&(t=!1),up.has(e)&&!t)return up.get(e);if(aR(e))return up.set(e,XO),XO;var 
n=getComputedStyle(e),r=v1(e)&&e.ownerSVGElement&&e.getBBox(),o=!ute&&n.boxSizing==="border-box",i=cte.test(n.writingMode||""),a=!r&&qO.test(n.overflowY||""),s=!r&&qO.test(n.overflowX||""),c=r?0:Oi(n.paddingTop),u=r?0:Oi(n.paddingRight),p=r?0:Oi(n.paddingBottom),v=r?0:Oi(n.paddingLeft),h=r?0:Oi(n.borderTopWidth),m=r?0:Oi(n.borderRightWidth),b=r?0:Oi(n.borderBottomWidth),y=r?0:Oi(n.borderLeftWidth),w=v+u,C=c+p,S=y+m,E=h+b,k=s?e.offsetHeight-E-e.clientHeight:0,O=a?e.offsetWidth-S-e.clientWidth:0,$=o?w+S:0,T=o?C+E:0,M=r?r.width:Oi(n.width)-$-O,P=r?r.height:Oi(n.height)-T-k,R=M+w+O+S,A=P+C+k+E,V=ks({devicePixelContentBoxSize:Dl(Math.round(M*devicePixelRatio),Math.round(P*devicePixelRatio),i),borderBoxSize:Dl(R,A,i),contentBoxSize:Dl(M,P,i),contentRect:new iR(v,c,M,P)});return up.set(e,V),V},lR=function(e,t,n){var r=sR(e,n),o=r.borderBoxSize,i=r.contentBoxSize,a=r.devicePixelContentBoxSize;switch(t){case sd.DEVICE_PIXEL_CONTENT_BOX:return a;case sd.BORDER_BOX:return o;default:return i}},dte=function(){function e(t){var n=sR(t);this.target=t,this.contentRect=n.contentRect,this.borderBoxSize=ks([n.borderBoxSize]),this.contentBoxSize=ks([n.contentBoxSize]),this.devicePixelContentBoxSize=ks([n.devicePixelContentBoxSize])}return e}(),cR=function(e){if(aR(e))return 1/0;for(var t=0,n=e.parentNode;n;)t+=1,n=n.parentNode;return t},fte=function(){var e=1/0,t=[];Es.forEach(function(a){if(a.activeTargets.length!==0){var s=[];a.activeTargets.forEach(function(u){var p=new dte(u.target),v=cR(u.target);s.push(p),u.lastReportedSize=lR(u.target,u.observedBox),ve?n.activeTargets.push(o):n.skippedTargets.push(o))})})},pte=function(){var e=0;for(GO(e);ote();)e=fte(),GO(e);return ite()&&ate(),e>0},u0,uR=[],vte=function(){return uR.splice(0).forEach(function(e){return e()})},hte=function(e){if(!u0){var t=0,n=document.createTextNode(""),r={characterData:!0};new MutationObserver(function(){return 
vte()}).observe(n,r),u0=function(){n.textContent="".concat(t?t--:t++)}}uR.push(e),u0()},gte=function(e){hte(function(){requestAnimationFrame(e)})},Bp=0,mte=function(){return!!Bp},bte=250,yte={attributes:!0,characterData:!0,childList:!0,subtree:!0},YO=["resize","load","transitionend","animationend","animationstart","animationiteration","keyup","keydown","mouseup","mousedown","mouseover","mouseout","blur","focus"],QO=function(e){return e===void 0&&(e=0),Date.now()+e},d0=!1,wte=function(){function e(){var t=this;this.stopped=!0,this.listener=function(){return t.schedule()}}return e.prototype.run=function(t){var n=this;if(t===void 0&&(t=bte),!d0){d0=!0;var r=QO(t);gte(function(){var o=!1;try{o=pte()}finally{if(d0=!1,t=r-QO(),!mte())return;o?n.run(1e3):t>0?n.run(t):n.start()}})}},e.prototype.schedule=function(){this.stop(),this.run()},e.prototype.observe=function(){var t=this,n=function(){return t.observer&&t.observer.observe(document.body,yte)};document.body?n():Tu.addEventListener("DOMContentLoaded",n)},e.prototype.start=function(){var t=this;this.stopped&&(this.stopped=!1,this.observer=new MutationObserver(this.listener),this.observe(),YO.forEach(function(n){return Tu.addEventListener(n,t.listener,!0)}))},e.prototype.stop=function(){var t=this;this.stopped||(this.observer&&this.observer.disconnect(),YO.forEach(function(n){return Tu.removeEventListener(n,t.listener,!0)}),this.stopped=!0)},e}(),ry=new wte,ZO=function(e){!Bp&&e>0&&ry.start(),Bp+=e,!Bp&&ry.stop()},xte=function(e){return!v1(e)&&!lte(e)&&getComputedStyle(e).display==="inline"},Ste=function(){function e(t,n){this.target=t,this.observedBox=n||sd.CONTENT_BOX,this.lastReportedSize={inlineSize:0,blockSize:0}}return e.prototype.isActive=function(){var t=lR(this.target,this.observedBox,!0);return xte(this.target)&&(this.lastReportedSize=t),this.lastReportedSize.inlineSize!==t.inlineSize||this.lastReportedSize.blockSize!==t.blockSize},e}(),Cte=function(){function 
e(t,n){this.activeTargets=[],this.skippedTargets=[],this.observationTargets=[],this.observer=t,this.callback=n}return e}(),dp=new WeakMap,JO=function(e,t){for(var n=0;n=0&&(i&&Es.splice(Es.indexOf(r),1),r.observationTargets.splice(o,1),ZO(-1))},e.disconnect=function(t){var n=this,r=dp.get(t);r.observationTargets.slice().forEach(function(o){return n.unobserve(t,o.target)}),r.activeTargets.splice(0,r.activeTargets.length)},e}(),Ete=function(){function e(t){if(arguments.length===0)throw new TypeError("Failed to construct 'ResizeObserver': 1 argument required, but only 0 present.");if(typeof t!="function")throw new TypeError("Failed to construct 'ResizeObserver': The callback provided as parameter 1 is not a function.");fp.connect(this,t)}return e.prototype.observe=function(t,n){if(arguments.length===0)throw new TypeError("Failed to execute 'observe' on 'ResizeObserver': 1 argument required, but only 0 present.");if(!KO(t))throw new TypeError("Failed to execute 'observe' on 'ResizeObserver': parameter 1 is not of type 'Element");fp.observe(this,t,n)},e.prototype.unobserve=function(t){if(arguments.length===0)throw new TypeError("Failed to execute 'unobserve' on 'ResizeObserver': 1 argument required, but only 0 present.");if(!KO(t))throw new TypeError("Failed to execute 'unobserve' on 'ResizeObserver': parameter 1 is not of type 'Element");fp.unobserve(this,t)},e.prototype.disconnect=function(){fp.disconnect(this)},e.toString=function(){return"function ResizeObserver () { [polyfill code] }"},e}(),oy=function(e,t){return oy=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(n,r){n.__proto__=r}||function(n,r){for(var o in r)Object.prototype.hasOwnProperty.call(r,o)&&(n[o]=r[o])},oy(e,t)};function dR(e,t){if(typeof t!="function"&&t!==null)throw new TypeError("Class extends value "+String(t)+" is not a constructor or null");oy(e,t);function n(){this.constructor=e}e.prototype=t===null?Object.create(t):(n.prototype=t.prototype,new n)}var nt=function(){return 
nt=Object.assign||function(t){for(var n,r=1,o=arguments.length;r0)&&!(o=r.next()).done;)i.push(o.value)}catch(s){a={error:s}}finally{try{o&&!o.done&&(n=r.return)&&n.call(r)}finally{if(a)throw a.error}}return i}function ji(e,t,n){if(arguments.length===2)for(var r=0,o=t.length,i;r"u"||process.env===void 0?kte:"production",Qi=function(e){return{isEnabled:function(t){return e.some(function(n){return!!t[n]})}}},ld={measureLayout:Qi(["layout","layoutId","drag"]),animation:Qi(["animate","exit","variants","whileHover","whileTap","whileFocus","whileDrag","whileInView"]),exit:Qi(["exit"]),drag:Qi(["drag","dragControls"]),focus:Qi(["whileFocus"]),hover:Qi(["whileHover","onHoverStart","onHoverEnd"]),tap:Qi(["whileTap","onTap","onTapStart","onTapCancel"]),pan:Qi(["onPan","onPanStart","onPanSessionStart","onPanEnd"]),inView:Qi(["whileInView","onViewportEnter","onViewportLeave"])};function Ote(e){for(var t in e)e[t]!==null&&(t==="projectionNodeConstructor"?ld.projectionNodeConstructor=e[t]:ld[t].Component=e[t])}var cd=function(){},pR=d.createContext({strict:!1}),vR=Object.keys(ld),$te=vR.length;function Ite(e,t,n){var r=[],o=d.useContext(pR);if(!t)return null;fR!=="production"&&n&&o.strict;for(var i=0;i<$te;i++){var a=vR[i],s=ld[a],c=s.isEnabled,u=s.Component;c(e)&&u&&r.push(d.createElement(u,nt({key:a},e,{visualElement:t})))}return r}var h1=d.createContext({transformPagePoint:function(e){return e},isStatic:!1,reducedMotion:"never"}),vh=d.createContext({});function Tte(){return d.useContext(vh).visualElement}var hh=d.createContext(null),wc=typeof document<"u",e$=wc?d.useLayoutEffect:d.useEffect,iy={current:null},hR=!1;function Pte(){if(hR=!0,!!wc)if(window.matchMedia){var e=window.matchMedia("(prefers-reduced-motion)"),t=function(){return iy.current=e.matches};e.addListener(t),t()}else iy.current=!1}function Mte(){!hR&&Pte();var e=wr(d.useState(iy.current),1),t=e[0];return t}function Nte(){var e=Mte(),t=d.useContext(h1).reducedMotion;return 
t==="never"?!1:t==="always"?!0:e}function Rte(e,t,n,r){var o=d.useContext(pR),i=Tte(),a=d.useContext(hh),s=Nte(),c=d.useRef(void 0);r||(r=o.renderer),!c.current&&r&&(c.current=r(e,{visualState:t,parent:i,props:n,presenceId:a==null?void 0:a.id,blockInitialAnimation:(a==null?void 0:a.initial)===!1,shouldReduceMotion:s}));var u=c.current;return e$(function(){u==null||u.syncRender()}),d.useEffect(function(){var p;(p=u==null?void 0:u.animationState)===null||p===void 0||p.animateChanges()}),e$(function(){return function(){return u==null?void 0:u.notifyUnmount()}},[]),u}function Ol(e){return typeof e=="object"&&Object.prototype.hasOwnProperty.call(e,"current")}function Dte(e,t,n){return d.useCallback(function(r){var o;r&&((o=e.mount)===null||o===void 0||o.call(e,r)),t&&(r?t.mount(r):t.unmount()),n&&(typeof n=="function"?n(r):Ol(n)&&(n.current=r))},[t])}function gR(e){return Array.isArray(e)}function li(e){return typeof e=="string"||gR(e)}function jte(e){var t={};return e.forEachValue(function(n,r){return t[r]=n.get()}),t}function Lte(e){var t={};return e.forEachValue(function(n,r){return t[r]=n.getVelocity()}),t}function mR(e,t,n,r,o){var i;return r===void 0&&(r={}),o===void 0&&(o={}),typeof t=="function"&&(t=t(n??e.custom,r,o)),typeof t=="string"&&(t=(i=e.variants)===null||i===void 0?void 0:i[t]),typeof t=="function"&&(t=t(n??e.custom,r,o)),t}function gh(e,t,n){var r=e.getProps();return mR(r,t,n??r.custom,jte(e),Lte(e))}function mh(e){var t;return typeof((t=e.animate)===null||t===void 0?void 0:t.start)=="function"||li(e.initial)||li(e.animate)||li(e.whileHover)||li(e.whileDrag)||li(e.whileTap)||li(e.whileFocus)||li(e.exit)}function bR(e){return!!(mh(e)||e.variants)}function Bte(e,t){if(mh(e)){var n=e.initial,r=e.animate;return{initial:n===!1||li(n)?n:void 0,animate:li(r)?r:void 0}}return e.inherit!==!1?t:{}}function Ate(e){var t=Bte(e,d.useContext(vh)),n=t.initial,r=t.animate;return d.useMemo(function(){return{initial:n,animate:r}},[t$(n),t$(r)])}function t$(e){return 
Array.isArray(e)?e.join(" "):e}function bh(e){var t=d.useRef(null);return t.current===null&&(t.current=e()),t.current}var Pu={hasAnimatedSinceResize:!0,hasEverUpdated:!1},zte=1;function Hte(){return bh(function(){if(Pu.hasEverUpdated)return zte++})}var yR=d.createContext({}),wR=d.createContext({});function Fte(e,t,n,r){var o,i=t.layoutId,a=t.layout,s=t.drag,c=t.dragConstraints,u=t.layoutScroll,p=d.useContext(wR);!r||!n||n!=null&&n.projection||(n.projection=new r(e,n.getLatestValues(),(o=n.parent)===null||o===void 0?void 0:o.projection),n.projection.setOptions({layoutId:i,layout:a,alwaysMeasureLayout:!!s||c&&Ol(c),visualElement:n,scheduleRender:function(){return n.scheduleRender()},animationType:typeof a=="string"?a:"both",initialPromotionConfig:p,layoutScroll:u}))}var _te=function(e){dR(t,e);function t(){return e!==null&&e.apply(this,arguments)||this}return t.prototype.getSnapshotBeforeUpdate=function(){return this.updateProps(),null},t.prototype.componentDidUpdate=function(){},t.prototype.updateProps=function(){var n=this.props,r=n.visualElement,o=n.props;r&&r.setProps(o)},t.prototype.render=function(){return this.props.children},t}(ue.Component);function Vte(e){var t=e.preloadedFeatures,n=e.createVisualElement,r=e.projectionNodeConstructor,o=e.useRender,i=e.useVisualState,a=e.Component;t&&Ote(t);function s(c,u){var p=Wte(c);c=nt(nt({},c),{layoutId:p});var v=d.useContext(h1),h=null,m=Ate(c),b=v.isStatic?void 0:Hte(),y=i(c,v.isStatic);return!v.isStatic&&wc&&(m.visualElement=Rte(a,y,nt(nt({},v),c),n),Fte(b,c,m.visualElement,r||ld.projectionNodeConstructor),h=Ite(c,m.visualElement,t)),d.createElement(_te,{visualElement:m.visualElement,props:nt(nt({},v),c)},h,d.createElement(vh.Provider,{value:m},o(a,c,b,Dte(y,m.visualElement,u),y,v.isStatic,m.visualElement)))}return d.forwardRef(s)}function Wte(e){var t,n=e.layoutId,r=(t=d.useContext(yR))===null||t===void 0?void 0:t.id;return r&&n!==void 0?r+"-"+n:n}function Ute(e){function t(r,o){return o===void 
0&&(o={}),Vte(e(r,o))}if(typeof Proxy>"u")return t;var n=new Map;return new Proxy(t,{get:function(r,o){return n.has(o)||n.set(o,t(o)),n.get(o)}})}var Kte=["animate","circle","defs","desc","ellipse","g","image","line","filter","marker","mask","metadata","path","pattern","polygon","polyline","rect","stop","svg","switch","symbol","text","tspan","use","view"];function g1(e){return typeof e!="string"||e.includes("-")?!1:!!(Kte.indexOf(e)>-1||/[A-Z]/.test(e))}var pv={};function qte(e){Object.assign(pv,e)}var ay=["","X","Y","Z"],Xte=["translate","scale","rotate","skew"],ud=["transformPerspective","x","y","z"];Xte.forEach(function(e){return ay.forEach(function(t){return ud.push(e+t)})});function Gte(e,t){return ud.indexOf(e)-ud.indexOf(t)}var Yte=new Set(ud);function jd(e){return Yte.has(e)}var Qte=new Set(["originX","originY","originZ"]);function xR(e){return Qte.has(e)}function SR(e,t){var n=t.layout,r=t.layoutId;return jd(e)||xR(e)||(n||r!==void 0)&&(!!pv[e]||e==="opacity")}var ia=function(e){return!!(e!==null&&typeof e=="object"&&e.getVelocity)},Zte={x:"translateX",y:"translateY",z:"translateZ",transformPerspective:"perspective"};function Jte(e,t,n,r){var o=e.transform,i=e.transformKeys,a=t.enableHardwareAcceleration,s=a===void 0?!0:a,c=t.allowTransformNone,u=c===void 0?!0:c,p="";i.sort(Gte);for(var v=!1,h=i.length,m=0;mn=>Math.max(Math.min(n,t),e),Mu=e=>e%1?Number(e.toFixed(5)):e,dd=/(-)?([\d]*\.?[\d])+/g,sy=/(#[0-9a-f]{6}|#[0-9a-f]{3}|#(?:[0-9a-f]{2}){2,4}|(rgb|hsl)a?\((-?[\d\.]+%?[,\s]+){2,3}\s*\/*\s*[\d\.]+%?\))/gi,nne=/^(#[0-9a-f]{3}|#(?:[0-9a-f]{2}){2,4}|(rgb|hsl)a?\((-?[\d\.]+%?[,\s]+){2,3}\s*\/*\s*[\d\.]+%?\))$/i;function Ld(e){return typeof e=="string"}const zs={test:e=>typeof e=="number",parse:parseFloat,transform:e=>e},Nu=Object.assign(Object.assign({},zs),{transform:ER(0,1)}),pp=Object.assign(Object.assign({},zs),{default:1}),Bd=e=>({test:t=>Ld(t)&&t.endsWith(e)&&t.split(" 
").length===1,parse:parseFloat,transform:t=>`${t}${e}`}),Ra=Bd("deg"),Li=Bd("%"),Ut=Bd("px"),rne=Bd("vh"),one=Bd("vw"),n$=Object.assign(Object.assign({},Li),{parse:e=>Li.parse(e)/100,transform:e=>Li.transform(e*100)}),m1=(e,t)=>n=>!!(Ld(n)&&nne.test(n)&&n.startsWith(e)||t&&Object.prototype.hasOwnProperty.call(n,t)),kR=(e,t,n)=>r=>{if(!Ld(r))return r;const[o,i,a,s]=r.match(dd);return{[e]:parseFloat(o),[t]:parseFloat(i),[n]:parseFloat(a),alpha:s!==void 0?parseFloat(s):1}},xs={test:m1("hsl","hue"),parse:kR("hue","saturation","lightness"),transform:({hue:e,saturation:t,lightness:n,alpha:r=1})=>"hsla("+Math.round(e)+", "+Li.transform(Mu(t))+", "+Li.transform(Mu(n))+", "+Mu(Nu.transform(r))+")"},ine=ER(0,255),f0=Object.assign(Object.assign({},zs),{transform:e=>Math.round(ine(e))}),Fa={test:m1("rgb","red"),parse:kR("red","green","blue"),transform:({red:e,green:t,blue:n,alpha:r=1})=>"rgba("+f0.transform(e)+", "+f0.transform(t)+", "+f0.transform(n)+", "+Mu(Nu.transform(r))+")"};function ane(e){let t="",n="",r="",o="";return e.length>5?(t=e.substr(1,2),n=e.substr(3,2),r=e.substr(5,2),o=e.substr(7,2)):(t=e.substr(1,1),n=e.substr(2,1),r=e.substr(3,1),o=e.substr(4,1),t+=t,n+=n,r+=r,o+=o),{red:parseInt(t,16),green:parseInt(n,16),blue:parseInt(r,16),alpha:o?parseInt(o,16)/255:1}}const ly={test:m1("#"),parse:ane,transform:Fa.transform},qr={test:e=>Fa.test(e)||ly.test(e)||xs.test(e),parse:e=>Fa.test(e)?Fa.parse(e):xs.test(e)?xs.parse(e):ly.parse(e),transform:e=>Ld(e)?e:e.hasOwnProperty("red")?Fa.transform(e):xs.transform(e)},OR="${c}",$R="${n}";function sne(e){var t,n,r,o;return isNaN(e)&&Ld(e)&&((n=(t=e.match(dd))===null||t===void 0?void 0:t.length)!==null&&n!==void 0?n:0)+((o=(r=e.match(sy))===null||r===void 0?void 0:r.length)!==null&&o!==void 0?o:0)>0}function IR(e){typeof e=="number"&&(e=`${e}`);const t=[];let n=0;const r=e.match(sy);r&&(n=r.length,e=e.replace(sy,OR),t.push(...r.map(qr.parse)));const o=e.match(dd);return 
o&&(e=e.replace(dd,$R),t.push(...o.map(zs.parse))),{values:t,numColors:n,tokenised:e}}function TR(e){return IR(e).values}function PR(e){const{values:t,numColors:n,tokenised:r}=IR(e),o=t.length;return i=>{let a=r;for(let s=0;stypeof e=="number"?0:e;function cne(e){const t=TR(e);return PR(e)(t.map(lne))}const aa={test:sne,parse:TR,createTransformer:PR,getAnimatableNone:cne},une=new Set(["brightness","contrast","saturate","opacity"]);function dne(e){let[t,n]=e.slice(0,-1).split("(");if(t==="drop-shadow")return e;const[r]=n.match(dd)||[];if(!r)return e;const o=n.replace(r,"");let i=une.has(t)?1:0;return r!==n&&(i*=100),t+"("+i+o+")"}const fne=/([a-z-]*)\(.*?\)/g,cy=Object.assign(Object.assign({},aa),{getAnimatableNone:e=>{const t=e.match(fne);return t?t.map(dne).join(" "):e}});var r$=nt(nt({},zs),{transform:Math.round}),MR={borderWidth:Ut,borderTopWidth:Ut,borderRightWidth:Ut,borderBottomWidth:Ut,borderLeftWidth:Ut,borderRadius:Ut,radius:Ut,borderTopLeftRadius:Ut,borderTopRightRadius:Ut,borderBottomRightRadius:Ut,borderBottomLeftRadius:Ut,width:Ut,maxWidth:Ut,height:Ut,maxHeight:Ut,size:Ut,top:Ut,right:Ut,bottom:Ut,left:Ut,padding:Ut,paddingTop:Ut,paddingRight:Ut,paddingBottom:Ut,paddingLeft:Ut,margin:Ut,marginTop:Ut,marginRight:Ut,marginBottom:Ut,marginLeft:Ut,rotate:Ra,rotateX:Ra,rotateY:Ra,rotateZ:Ra,scale:pp,scaleX:pp,scaleY:pp,scaleZ:pp,skew:Ra,skewX:Ra,skewY:Ra,distance:Ut,translateX:Ut,translateY:Ut,translateZ:Ut,x:Ut,y:Ut,z:Ut,perspective:Ut,transformPerspective:Ut,opacity:Nu,originX:n$,originY:n$,originZ:Ut,zIndex:r$,fillOpacity:Nu,strokeOpacity:Nu,numOctaves:r$};function b1(e,t,n,r){var o,i=e.style,a=e.vars,s=e.transform,c=e.transformKeys,u=e.transformOrigin;c.length=0;var p=!1,v=!1,h=!0;for(var m in t){var b=t[m];if(CR(m)){a[m]=b;continue}var y=MR[m],w=tne(b,y);if(jd(m)){if(p=!0,s[m]=w,c.push(m),!h)continue;b!==((o=y.default)!==null&&o!==void 0?o:0)&&(h=!1)}else 
xR(m)?(u[m]=w,v=!0):i[m]=w}p?i.transform=Jte(e,n,h,r):r?i.transform=r({},""):!t.transform&&i.transform&&(i.transform="none"),v&&(i.transformOrigin=ene(u))}var y1=function(){return{style:{},transform:{},transformKeys:[],transformOrigin:{},vars:{}}};function NR(e,t,n){for(var r in t)!ia(t[r])&&!SR(r,n)&&(e[r]=t[r])}function pne(e,t,n){var r=e.transformTemplate;return d.useMemo(function(){var o=y1();b1(o,t,{enableHardwareAcceleration:!n},r);var i=o.vars,a=o.style;return nt(nt({},i),a)},[t])}function vne(e,t,n){var r=e.style||{},o={};return NR(o,r,e),Object.assign(o,pne(e,t,n)),e.transformValues&&(o=e.transformValues(o)),o}function hne(e,t,n){var r={},o=vne(e,t,n);return e.drag&&e.dragListener!==!1&&(r.draggable=!1,o.userSelect=o.WebkitUserSelect=o.WebkitTouchCallout="none",o.touchAction=e.drag===!0?"none":"pan-".concat(e.drag==="x"?"y":"x")),r.style=o,r}var gne=new Set(["initial","animate","exit","style","variants","transition","transformTemplate","transformValues","custom","inherit","layout","layoutId","layoutDependency","onLayoutAnimationStart","onLayoutAnimationComplete","onLayoutMeasure","onBeforeLayoutMeasure","onAnimationStart","onAnimationComplete","onUpdate","onDragStart","onDrag","onDragEnd","onMeasureDragConstraints","onDirectionLock","onDragTransitionEnd","drag","dragControls","dragListener","dragConstraints","dragDirectionLock","dragSnapToOrigin","_dragX","_dragY","dragElastic","dragMomentum","dragPropagation","dragTransition","whileDrag","onPan","onPanStart","onPanEnd","onPanSessionStart","onTap","onTapStart","onTapCancel","onHoverStart","onHoverEnd","whileFocus","whileTap","whileHover","whileInView","onViewportEnter","onViewportLeave","viewport","layoutScroll"]);function vv(e){return gne.has(e)}var RR=function(e){return!vv(e)};function mne(e){e&&(RR=function(t){return t.startsWith("on")?!vv(t):e(t)})}try{mne(require("@emotion/is-prop-valid").default)}catch{}function bne(e,t,n){var r={};for(var o in 
e)(RR(o)||n===!0&&vv(o)||!t&&!vv(o)||e.draggable&&o.startsWith("onDrag"))&&(r[o]=e[o]);return r}function o$(e,t,n){return typeof e=="string"?e:Ut.transform(t+n*e)}function yne(e,t,n){var r=o$(t,e.x,e.width),o=o$(n,e.y,e.height);return"".concat(r," ").concat(o)}var wne={offset:"strokeDashoffset",array:"strokeDasharray"};function xne(e,t,n,r,o){n===void 0&&(n=1),r===void 0&&(r=0),e.pathLength=1;var i=wne;e[i.offset]=Ut.transform(-r);var a=Ut.transform(t),s=Ut.transform(n);e[i.array]="".concat(a," ").concat(s)}function w1(e,t,n,r){var o=t.attrX,i=t.attrY,a=t.originX,s=t.originY,c=t.pathLength,u=t.pathSpacing,p=u===void 0?1:u,v=t.pathOffset,h=v===void 0?0:v,m=Yo(t,["attrX","attrY","originX","originY","pathLength","pathSpacing","pathOffset"]);b1(e,m,n,r),e.attrs=e.style,e.style={};var b=e.attrs,y=e.style,w=e.dimensions;b.transform&&(w&&(y.transform=b.transform),delete b.transform),w&&(a!==void 0||s!==void 0||y.transform)&&(y.transformOrigin=yne(w,a!==void 0?a:.5,s!==void 0?s:.5)),o!==void 0&&(b.x=o),i!==void 0&&(b.y=i),c!==void 0&&xne(b,c,p,h)}var DR=function(){return nt(nt({},y1()),{attrs:{}})};function Sne(e,t){var n=d.useMemo(function(){var o=DR();return w1(o,t,{enableHardwareAcceleration:!1},e.transformTemplate),nt(nt({},o.attrs),{style:nt({},o.style)})},[t]);if(e.style){var r={};NR(r,e.style,e),n.style=nt(nt({},r),n.style)}return n}function Cne(e){e===void 0&&(e=!1);var t=function(n,r,o,i,a,s){var c=a.latestValues,u=g1(n)?Sne:hne,p=u(r,c,s),v=bne(r,typeof n=="string",e),h=nt(nt(nt({},v),p),{ref:i});return o&&(h["data-projection-id"]=o),d.createElement(n,h)};return t}var Ene=/([a-z])([A-Z])/g,kne="$1-$2",jR=function(e){return e.replace(Ene,kne).toLowerCase()};function LR(e,t,n,r){var o=t.style,i=t.vars;Object.assign(e.style,o,r&&r.getProjectionStyles(n));for(var a in i)e.style.setProperty(a,i[a])}var BR=new 
Set(["baseFrequency","diffuseConstant","kernelMatrix","kernelUnitLength","keySplines","keyTimes","limitingConeAngle","markerHeight","markerWidth","numOctaves","targetX","targetY","surfaceScale","specularConstant","specularExponent","stdDeviation","tableValues","viewBox","gradientTransform","pathLength"]);function AR(e,t,n,r){LR(e,t,void 0,r);for(var o in t.attrs)e.setAttribute(BR.has(o)?o:jR(o),t.attrs[o])}function x1(e){var t=e.style,n={};for(var r in t)(ia(t[r])||SR(r,e))&&(n[r]=t[r]);return n}function zR(e){var t=x1(e);for(var n in e)if(ia(e[n])){var r=n==="x"||n==="y"?"attr"+n.toUpperCase():n;t[r]=e[n]}return t}function S1(e){return typeof e=="object"&&typeof e.start=="function"}var fd=function(e){return Array.isArray(e)},One=function(e){return!!(e&&typeof e=="object"&&e.mix&&e.toValue)},HR=function(e){return fd(e)?e[e.length-1]||0:e};function Ap(e){var t=ia(e)?e.get():e;return One(t)?t.toValue():t}function i$(e,t,n,r){var o=e.scrapeMotionValuesFromProps,i=e.createRenderState,a=e.onMount,s={latestValues:$ne(t,n,r,o),renderState:i()};return a&&(s.mount=function(c){return a(t,c,s)}),s}var FR=function(e){return function(t,n){var r=d.useContext(vh),o=d.useContext(hh);return n?i$(e,t,r,o):bh(function(){return i$(e,t,r,o)})}};function $ne(e,t,n,r){var o={},i=(n==null?void 0:n.initial)===!1,a=r(e);for(var s in a)o[s]=Ap(a[s]);var c=e.initial,u=e.animate,p=mh(e),v=bR(e);t&&v&&!p&&e.inherit!==!1&&(c??(c=t.initial),u??(u=t.animate));var h=i||c===!1,m=h?u:c;if(m&&typeof m!="boolean"&&!S1(m)){var b=Array.isArray(m)?m:[m];b.forEach(function(y){var w=mR(e,y);if(w){var C=w.transitionEnd;w.transition;var S=Yo(w,["transitionEnd","transition"]);for(var E in S){var k=S[E];if(Array.isArray(k)){var O=h?k.length-1:0;k=k[O]}k!==null&&(o[E]=k)}for(var E in C)o[E]=C[E]}})}return o}var Ine={useVisualState:FR({scrapeMotionValuesFromProps:zR,createRenderState:DR,onMount:function(e,t,n){var r=n.renderState,o=n.latestValues;try{r.dimensions=typeof 
t.getBBox=="function"?t.getBBox():t.getBoundingClientRect()}catch{r.dimensions={x:0,y:0,width:0,height:0}}w1(r,o,{enableHardwareAcceleration:!1},e.transformTemplate),AR(t,r)}})},Tne={useVisualState:FR({scrapeMotionValuesFromProps:x1,createRenderState:y1})};function Pne(e,t,n,r,o){var i=t.forwardMotionProps,a=i===void 0?!1:i,s=g1(e)?Ine:Tne;return nt(nt({},s),{preloadedFeatures:n,useRender:Cne(a),createVisualElement:r,projectionNodeConstructor:o,Component:e})}var Rn;(function(e){e.Animate="animate",e.Hover="whileHover",e.Tap="whileTap",e.Drag="whileDrag",e.Focus="whileFocus",e.InView="whileInView",e.Exit="exit"})(Rn||(Rn={}));function yh(e,t,n,r){return r===void 0&&(r={passive:!0}),e.addEventListener(t,n,r),function(){return e.removeEventListener(t,n)}}function uy(e,t,n,r){d.useEffect(function(){var o=e.current;if(n&&o)return yh(o,t,n,r)},[e,t,n,r])}function Mne(e){var t=e.whileFocus,n=e.visualElement,r=function(){var i;(i=n.animationState)===null||i===void 0||i.setActive(Rn.Focus,!0)},o=function(){var i;(i=n.animationState)===null||i===void 0||i.setActive(Rn.Focus,!1)};uy(n,"focus",t?r:void 0),uy(n,"blur",t?o:void 0)}function _R(e){return typeof PointerEvent<"u"&&e instanceof PointerEvent?e.pointerType==="mouse":e instanceof MouseEvent}function VR(e){var t=!!e.touches;return t}function Nne(e){return function(t){var n=t instanceof MouseEvent,r=!n||n&&t.button===0;r&&e(t)}}var Rne={pageX:0,pageY:0};function Dne(e,t){t===void 0&&(t="page");var n=e.touches[0]||e.changedTouches[0],r=n||Rne;return{x:r[t+"X"],y:r[t+"Y"]}}function jne(e,t){return t===void 0&&(t="page"),{x:e[t+"X"],y:e[t+"Y"]}}function C1(e,t){return t===void 0&&(t="page"),{point:VR(e)?Dne(e,t):jne(e,t)}}var WR=function(e,t){t===void 0&&(t=!1);var n=function(r){return e(r,C1(r))};return t?Nne(n):n},Lne=function(){return wc&&window.onpointerdown===null},Bne=function(){return wc&&window.ontouchstart===null},Ane=function(){return 
wc&&window.onmousedown===null},zne={pointerdown:"mousedown",pointermove:"mousemove",pointerup:"mouseup",pointercancel:"mousecancel",pointerover:"mouseover",pointerout:"mouseout",pointerenter:"mouseenter",pointerleave:"mouseleave"},Hne={pointerdown:"touchstart",pointermove:"touchmove",pointerup:"touchend",pointercancel:"touchcancel"};function UR(e){return Lne()?e:Bne()?Hne[e]:Ane()?zne[e]:e}function jl(e,t,n,r){return yh(e,UR(t),WR(n,t==="pointerdown"),r)}function hv(e,t,n,r){return uy(e,UR(t),n&&WR(n,t==="pointerdown"),r)}function KR(e){var t=null;return function(){var n=function(){t=null};return t===null?(t=e,n):!1}}var a$=KR("dragHorizontal"),s$=KR("dragVertical");function qR(e){var t=!1;if(e==="y")t=s$();else if(e==="x")t=a$();else{var n=a$(),r=s$();n&&r?t=function(){n(),r()}:(n&&n(),r&&r())}return t}function XR(){var e=qR(!0);return e?(e(),!1):!0}function l$(e,t,n){return function(r,o){var i;!_R(r)||XR()||((i=e.animationState)===null||i===void 0||i.setActive(Rn.Hover,t),n==null||n(r,o))}}function Fne(e){var t=e.onHoverStart,n=e.onHoverEnd,r=e.whileHover,o=e.visualElement;hv(o,"pointerenter",t||r?l$(o,!0,t):void 0,{passive:!t}),hv(o,"pointerleave",n||r?l$(o,!1,n):void 0,{passive:!n})}var GR=function(e,t){return t?e===t?!0:GR(e,t.parentElement):!1};function YR(e){return d.useEffect(function(){return function(){return e()}},[])}const gv=(e,t,n)=>Math.min(Math.max(n,e),t),p0=.001,_ne=.01,Vne=10,Wne=.05,Une=1;function Kne({duration:e=800,bounce:t=.25,velocity:n=0,mass:r=1}){let o,i,a=1-t;a=gv(Wne,Une,a),e=gv(_ne,Vne,e/1e3),a<1?(o=u=>{const p=u*a,v=p*e,h=p-n,m=dy(u,a),b=Math.exp(-v);return p0-h/m*b},i=u=>{const v=u*a*e,h=v*n+n,m=Math.pow(a,2)*Math.pow(u,2)*e,b=Math.exp(-v),y=dy(Math.pow(u,2),a);return(-o(u)+p0>0?-1:1)*((h-m)*b)/y}):(o=u=>{const p=Math.exp(-u*e),v=(u-n)*e+1;return-p0+p*v},i=u=>{const p=Math.exp(-u*e),v=(n-u)*(e*e);return p*v});const s=5/e,c=Xne(o,i,s);if(e*=1e3,isNaN(c))return{stiffness:100,damping:10,duration:e};{const 
u=Math.pow(c,2)*r;return{stiffness:u,damping:a*2*Math.sqrt(r*u),duration:e}}}const qne=12;function Xne(e,t,n){let r=n;for(let o=1;oe[n]!==void 0)}function Qne(e){let t=Object.assign({velocity:0,stiffness:100,damping:10,mass:1,isResolvedFromDuration:!1},e);if(!c$(e,Yne)&&c$(e,Gne)){const n=Kne(e);t=Object.assign(Object.assign(Object.assign({},t),n),{velocity:0,mass:1}),t.isResolvedFromDuration=!0}return t}function E1(e){var{from:t=0,to:n=1,restSpeed:r=2,restDelta:o}=e,i=Yo(e,["from","to","restSpeed","restDelta"]);const a={done:!1,value:t};let{stiffness:s,damping:c,mass:u,velocity:p,duration:v,isResolvedFromDuration:h}=Qne(i),m=u$,b=u$;function y(){const w=p?-p/1e3:0,C=n-t,S=c/(2*Math.sqrt(s*u)),E=Math.sqrt(s/u)/1e3;if(o===void 0&&(o=Math.min(Math.abs(n-t)/100,.4)),S<1){const k=dy(E,S);m=O=>{const $=Math.exp(-S*E*O);return n-$*((w+S*E*C)/k*Math.sin(k*O)+C*Math.cos(k*O))},b=O=>{const $=Math.exp(-S*E*O);return S*E*$*(Math.sin(k*O)*(w+S*E*C)/k+C*Math.cos(k*O))-$*(Math.cos(k*O)*(w+S*E*C)-k*C*Math.sin(k*O))}}else if(S===1)m=k=>n-Math.exp(-E*k)*(C+(w+E*C)*k);else{const k=E*Math.sqrt(S*S-1);m=O=>{const $=Math.exp(-S*E*O),T=Math.min(k*O,300);return n-$*((w+S*E*C)*Math.sinh(T)+k*C*Math.cosh(T))/k}}}return y(),{next:w=>{const C=m(w);if(h)a.done=w>=v;else{const S=b(w)*1e3,E=Math.abs(S)<=r,k=Math.abs(n-C)<=o;a.done=E&&k}return a.value=a.done?n:C,a},flipTarget:()=>{p=-p,[t,n]=[n,t],y()}}}E1.needsInterpolation=(e,t)=>typeof e=="string"||typeof t=="string";const u$=e=>0,pd=(e,t,n)=>{const r=t-e;return r===0?1:(n-e)/r},er=(e,t,n)=>-n*e+n*t+e;function v0(e,t,n){return n<0&&(n+=1),n>1&&(n-=1),n<1/6?e+(t-e)*6*n:n<1/2?t:n<2/3?e+(t-e)*(2/3-n)*6:e}function d$({hue:e,saturation:t,lightness:n,alpha:r}){e/=360,t/=100,n/=100;let o=0,i=0,a=0;if(!t)o=i=a=n;else{const s=n<.5?n*(1+t):n+t-n*t,c=2*n-s;o=v0(c,s,e+1/3),i=v0(c,s,e),a=v0(c,s,e-1/3)}return{red:Math.round(o*255),green:Math.round(i*255),blue:Math.round(a*255),alpha:r}}const Zne=(e,t,n)=>{const r=e*e,o=t*t;return 
Math.sqrt(Math.max(0,n*(o-r)+r))},Jne=[ly,Fa,xs],f$=e=>Jne.find(t=>t.test(e)),QR=(e,t)=>{let n=f$(e),r=f$(t),o=n.parse(e),i=r.parse(t);n===xs&&(o=d$(o),n=Fa),r===xs&&(i=d$(i),r=Fa);const a=Object.assign({},o);return s=>{for(const c in a)c!=="alpha"&&(a[c]=Zne(o[c],i[c],s));return a.alpha=er(o.alpha,i.alpha,s),n.transform(a)}},fy=e=>typeof e=="number",ere=(e,t)=>n=>t(e(n)),wh=(...e)=>e.reduce(ere);function ZR(e,t){return fy(e)?n=>er(e,t,n):qr.test(e)?QR(e,t):e4(e,t)}const JR=(e,t)=>{const n=[...e],r=n.length,o=e.map((i,a)=>ZR(i,t[a]));return i=>{for(let a=0;a{const n=Object.assign(Object.assign({},e),t),r={};for(const o in n)e[o]!==void 0&&t[o]!==void 0&&(r[o]=ZR(e[o],t[o]));return o=>{for(const i in r)n[i]=r[i](o);return n}};function p$(e){const t=aa.parse(e),n=t.length;let r=0,o=0,i=0;for(let a=0;a{const n=aa.createTransformer(t),r=p$(e),o=p$(t);return r.numHSL===o.numHSL&&r.numRGB===o.numRGB&&r.numNumbers>=o.numNumbers?wh(JR(r.parsed,o.parsed),n):a=>`${a>0?t:e}`},nre=(e,t)=>n=>er(e,t,n);function rre(e){if(typeof e=="number")return nre;if(typeof e=="string")return qr.test(e)?QR:e4;if(Array.isArray(e))return JR;if(typeof e=="object")return tre}function ore(e,t,n){const r=[],o=n||rre(e[0]),i=e.length-1;for(let a=0;an(pd(e,t,r))}function are(e,t){const n=e.length,r=n-1;return o=>{let i=0,a=!1;if(o<=e[0]?a=!0:o>=e[r]&&(i=r-1,a=!0),!a){let c=1;for(;co||c===r);c++);i=c-1}const s=pd(e[i],e[i+1],o);return t[i](s)}}function t4(e,t,{clamp:n=!0,ease:r,mixer:o}={}){const i=e.length;cd(i===t.length),cd(!r||!Array.isArray(r)||r.length===i-1),e[0]>e[i-1]&&(e=[].concat(e),t=[].concat(t),e.reverse(),t.reverse());const a=ore(t,r,o),s=i===2?ire(e,a):are(e,a);return n?c=>s(gv(e[0],e[i-1],c)):s}const xh=e=>t=>1-e(1-t),k1=e=>t=>t<=.5?e(2*t)/2:(2-e(2*(1-t)))/2,sre=e=>t=>Math.pow(t,e),n4=e=>t=>t*t*((e+1)*t-e),lre=e=>{const t=n4(e);return 
n=>(n*=2)<1?.5*t(n):.5*(2-Math.pow(2,-10*(n-1)))},r4=1.525,cre=4/11,ure=8/11,dre=9/10,O1=e=>e,$1=sre(2),fre=xh($1),o4=k1($1),i4=e=>1-Math.sin(Math.acos(e)),I1=xh(i4),pre=k1(I1),T1=n4(r4),vre=xh(T1),hre=k1(T1),gre=lre(r4),mre=4356/361,bre=35442/1805,yre=16061/1805,mv=e=>{if(e===1||e===0)return e;const t=e*e;return ee<.5?.5*(1-mv(1-e*2)):.5*mv(e*2-1)+.5;function Sre(e,t){return e.map(()=>t||o4).splice(0,e.length-1)}function Cre(e){const t=e.length;return e.map((n,r)=>r!==0?r/(t-1):0)}function Ere(e,t){return e.map(n=>n*t)}function zp({from:e=0,to:t=1,ease:n,offset:r,duration:o=300}){const i={done:!1,value:e},a=Array.isArray(t)?t:[e,t],s=Ere(r&&r.length===a.length?r:Cre(a),o);function c(){return t4(s,a,{ease:Array.isArray(n)?n:Sre(a,n)})}let u=c();return{next:p=>(i.value=u(p),i.done=p>=o,i),flipTarget:()=>{a.reverse(),u=c()}}}function kre({velocity:e=0,from:t=0,power:n=.8,timeConstant:r=350,restDelta:o=.5,modifyTarget:i}){const a={done:!1,value:t};let s=n*e;const c=t+s,u=i===void 0?c:i(c);return u!==c&&(s=u-t),{next:p=>{const v=-s*Math.exp(-p/r);return a.done=!(v>o||v<-o),a.value=a.done?u:u+v,a},flipTarget:()=>{}}}const v$={keyframes:zp,spring:E1,decay:kre};function Ore(e){if(Array.isArray(e.to))return zp;if(v$[e.type])return v$[e.type];const t=new Set(Object.keys(e));return t.has("ease")||t.has("duration")&&!t.has("dampingRatio")?zp:t.has("dampingRatio")||t.has("stiffness")||t.has("mass")||t.has("damping")||t.has("restSpeed")||t.has("restDelta")?E1:zp}const a4=1/60*1e3,$re=typeof performance<"u"?()=>performance.now():()=>Date.now(),s4=typeof window<"u"?e=>window.requestAnimationFrame(e):e=>setTimeout(()=>e($re()),a4);function Ire(e){let t=[],n=[],r=0,o=!1,i=!1;const a=new WeakSet,s={schedule:(c,u=!1,p=!1)=>{const v=p&&o,h=v?t:n;return u&&a.add(c),h.indexOf(c)===-1&&(h.push(c),v&&o&&(r=t.length)),c},cancel:c=>{const u=n.indexOf(c);u!==-1&&n.splice(u,1),a.delete(c)},process:c=>{if(o){i=!0;return}if(o=!0,[t,n]=[n,t],n.length=0,r=t.length,r)for(let 
u=0;u(e[t]=Ire(()=>vd=!0),e),{}),Bi=Ad.reduce((e,t)=>{const n=Sh[t];return e[t]=(r,o=!1,i=!1)=>(vd||Mre(),n.schedule(r,o,i)),e},{}),nc=Ad.reduce((e,t)=>(e[t]=Sh[t].cancel,e),{}),h0=Ad.reduce((e,t)=>(e[t]=()=>Sh[t].process(Ll),e),{}),Pre=e=>Sh[e].process(Ll),l4=e=>{vd=!1,Ll.delta=py?a4:Math.max(Math.min(e-Ll.timestamp,Tre),1),Ll.timestamp=e,vy=!0,Ad.forEach(Pre),vy=!1,vd&&(py=!1,s4(l4))},Mre=()=>{vd=!0,py=!0,vy||s4(l4)},bv=()=>Ll;function c4(e,t,n=0){return e-t-n}function Nre(e,t,n=0,r=!0){return r?c4(t+-e,t,n):t-(e-t)+n}function Rre(e,t,n,r){return r?e>=t+n:e<=-n}const Dre=e=>{const t=({delta:n})=>e(n);return{start:()=>Bi.update(t,!0),stop:()=>nc.update(t)}};function u4(e){var t,n,{from:r,autoplay:o=!0,driver:i=Dre,elapsed:a=0,repeat:s=0,repeatType:c="loop",repeatDelay:u=0,onPlay:p,onStop:v,onComplete:h,onRepeat:m,onUpdate:b}=e,y=Yo(e,["from","autoplay","driver","elapsed","repeat","repeatType","repeatDelay","onPlay","onStop","onComplete","onRepeat","onUpdate"]);let{to:w}=y,C,S=0,E=y.duration,k,O=!1,$=!0,T;const M=Ore(y);!((n=(t=M).needsInterpolation)===null||n===void 0)&&n.call(t,r,w)&&(T=t4([0,100],[r,w],{clamp:!1}),r=0,w=100);const P=M(Object.assign(Object.assign({},y),{from:r,to:w}));function R(){S++,c==="reverse"?($=S%2===0,a=Nre(a,E,u,$)):(a=c4(a,E,u),c==="mirror"&&P.flipTarget()),O=!1,m&&m()}function A(){C.stop(),h&&h()}function V(B){if($||(B=-B),a+=B,!O){const _=P.next(Math.max(0,a));k=_.value,T&&(k=T(k)),O=$?_.done:a<=0}b==null||b(k),O&&(S===0&&(E??(E=a)),S{v==null||v(),C.stop()}}}function d4(e,t){return t?e*(1e3/t):0}function jre({from:e=0,velocity:t=0,min:n,max:r,power:o=.8,timeConstant:i=750,bounceStiffness:a=500,bounceDamping:s=10,restDelta:c=1,modifyTarget:u,driver:p,onUpdate:v,onComplete:h,onStop:m}){let b;function y(E){return n!==void 0&&Er}function w(E){return n===void 0?r:r===void 0||Math.abs(n-E){var O;v==null||v(k),(O=E.onUpdate)===null||O===void 0||O.call(E,k)},onComplete:h,onStop:m}))}function 
S(E){C(Object.assign({type:"spring",stiffness:a,damping:s,restDelta:c},E))}if(y(e))S({from:e,velocity:t,to:w(e)});else{let E=o*t+e;typeof u<"u"&&(E=u(E));const k=w(E),O=k===n?-1:1;let $,T;const M=P=>{$=T,T=P,t=d4(P-$,bv().delta),(O===1&&P>k||O===-1&&Pb==null?void 0:b.stop()}}const hy=e=>e.hasOwnProperty("x")&&e.hasOwnProperty("y"),h$=e=>hy(e)&&e.hasOwnProperty("z"),vp=(e,t)=>Math.abs(e-t);function f4(e,t){if(fy(e)&&fy(t))return vp(e,t);if(hy(e)&&hy(t)){const n=vp(e.x,t.x),r=vp(e.y,t.y),o=h$(e)&&h$(t)?vp(e.z,t.z):0;return Math.sqrt(Math.pow(n,2)+Math.pow(r,2)+Math.pow(o,2))}}const p4=(e,t)=>1-3*t+3*e,v4=(e,t)=>3*t-6*e,h4=e=>3*e,yv=(e,t,n)=>((p4(t,n)*e+v4(t,n))*e+h4(t))*e,g4=(e,t,n)=>3*p4(t,n)*e*e+2*v4(t,n)*e+h4(t),Lre=1e-7,Bre=10;function Are(e,t,n,r,o){let i,a,s=0;do a=t+(n-t)/2,i=yv(a,r,o)-e,i>0?n=a:t=a;while(Math.abs(i)>Lre&&++s=Hre?Fre(a,v,e,n):h===0?v:Are(a,s,s+hp,e,n)}return a=>a===0||a===1?a:yv(i(a),t,r)}function Vre(e){var t=e.onTap,n=e.onTapStart,r=e.onTapCancel,o=e.whileTap,i=e.visualElement,a=t||n||r||o,s=d.useRef(!1),c=d.useRef(null),u={passive:!(n||t||r||b)};function p(){var y;(y=c.current)===null||y===void 0||y.call(c),c.current=null}function v(){var y;return p(),s.current=!1,(y=i.animationState)===null||y===void 0||y.setActive(Rn.Tap,!1),!XR()}function h(y,w){v()&&(GR(i.getInstance(),y.target)?t==null||t(y,w):r==null||r(y,w))}function m(y,w){v()&&(r==null||r(y,w))}function b(y,w){var C;p(),!s.current&&(s.current=!0,c.current=wh(jl(window,"pointerup",h,u),jl(window,"pointercancel",m,u)),(C=i.animationState)===null||C===void 0||C.setActive(Rn.Tap,!0),n==null||n(y,w))}hv(i,"pointerdown",a?b:void 0,u),YR(p)}var g$=new Set;function Wre(e,t,n){g$.has(t)||g$.add(t)}var gy=new WeakMap,g0=new WeakMap,Ure=function(e){var t;(t=gy.get(e.target))===null||t===void 0||t(e)},Kre=function(e){e.forEach(Ure)};function qre(e){var t=e.root,n=Yo(e,["root"]),r=t||document;g0.has(r)||g0.set(r,{});var o=g0.get(r),i=JSON.stringify(n);return o[i]||(o[i]=new 
IntersectionObserver(Kre,nt({root:t},n))),o[i]}function Xre(e,t,n){var r=qre(t);return gy.set(e,n),r.observe(e),function(){gy.delete(e),r.unobserve(e)}}function Gre(e){var t=e.visualElement,n=e.whileInView,r=e.onViewportEnter,o=e.onViewportLeave,i=e.viewport,a=i===void 0?{}:i,s=d.useRef({hasEnteredView:!1,isInView:!1}),c=!!(n||r||o);a.once&&s.current.hasEnteredView&&(c=!1);var u=typeof IntersectionObserver>"u"?Zre:Qre;u(c,s.current,t,a)}var Yre={some:0,all:1};function Qre(e,t,n,r){var o=r.root,i=r.margin,a=r.amount,s=a===void 0?"some":a,c=r.once;d.useEffect(function(){if(e){var u={root:o==null?void 0:o.current,rootMargin:i,threshold:typeof s=="number"?s:Yre[s]},p=function(v){var h,m=v.isIntersecting;if(t.isInView!==m&&(t.isInView=m,!(c&&!m&&t.hasEnteredView))){m&&(t.hasEnteredView=!0),(h=n.animationState)===null||h===void 0||h.setActive(Rn.InView,m);var b=n.getProps(),y=m?b.onViewportEnter:b.onViewportLeave;y==null||y(v)}};return Xre(n.getInstance(),u,p)}},[e,o,i,s])}function Zre(e,t,n,r){var o=r.fallback,i=o===void 0?!0:o;d.useEffect(function(){!e||!i||(fR!=="production"&&Wre(!1,"IntersectionObserver not available on this device. 
whileInView animations will trigger on mount."),requestAnimationFrame(function(){var a;t.hasEnteredView=!0;var s=n.getProps().onViewportEnter;s==null||s(null),(a=n.animationState)===null||a===void 0||a.setActive(Rn.InView,!0)}))},[e])}var _a=function(e){return function(t){return e(t),null}},Jre={inView:_a(Gre),tap:_a(Vre),focus:_a(Mne),hover:_a(Fne)},eoe=0,toe=function(){return eoe++},noe=function(){return bh(toe)};function m4(){var e=d.useContext(hh);if(e===null)return[!0,null];var t=e.isPresent,n=e.onExitComplete,r=e.register,o=noe();d.useEffect(function(){return r(o)},[]);var i=function(){return n==null?void 0:n(o)};return!t&&n?[!1,i]:[!0]}function b4(e,t){if(!Array.isArray(t))return!1;var n=t.length;if(n!==e.length)return!1;for(var r=0;r-1&&e.splice(n,1)}var Ru=function(){function e(){this.subscriptions=[]}return e.prototype.add=function(t){var n=this;return D1(this.subscriptions,t),function(){return j1(n.subscriptions,t)}},e.prototype.notify=function(t,n,r){var o=this.subscriptions.length;if(o)if(o===1)this.subscriptions[0](t,n,r);else for(var i=0;iS&&A,H=Array.isArray(R)?R:[R],j=H.reduce(i,{});V===!1&&(j={});var L=P.prevResolvedValues,F=L===void 0?{}:L,U=nt(nt({},F),j),D=function(J){_=!0,w.delete(J),P.needsAnimating[J]=!0};for(var W in U){var G=j[W],q=F[W];C.hasOwnProperty(W)||(G!==q?fd(G)&&fd(q)?!b4(G,q)||B?D(W):P.protectedKeys[W]=!0:G!==void 0?D(W):w.add(W):G!==void 0&&w.has(W)?D(W):P.protectedKeys[W]=!0)}P.prevProp=R,P.prevResolvedValues=j,P.isActive&&(C=nt(nt({},C),j)),o&&e.blockInitialAnimation&&(_=!1),_&&!z&&y.push.apply(y,ji([],wr(H.map(function(J){return{animation:J,options:nt({type:M},p)}})),!1))},k=0;k=3;if(!(!m&&!b)){var y=h.point,w=bv().timestamp;o.history.push(nt(nt({},y),{timestamp:w}));var 
C=o.handlers,S=C.onStart,E=C.onMove;m||(S&&S(o.lastMoveEvent,h),o.startEvent=o.lastMoveEvent),E&&E(o.lastMoveEvent,h)}}},this.handlePointerMove=function(h,m){if(o.lastMoveEvent=h,o.lastMoveEventInfo=b0(m,o.transformPagePoint),_R(h)&&h.buttons===0){o.handlePointerUp(h,m);return}Bi.update(o.updatePoint,!0)},this.handlePointerUp=function(h,m){o.end();var b=o.handlers,y=b.onEnd,w=b.onSessionEnd,C=y0(b0(m,o.transformPagePoint),o.history);o.startEvent&&y&&y(h,C),w&&w(h,C)},!(VR(t)&&t.touches.length>1)){this.handlers=n,this.transformPagePoint=a;var s=C1(t),c=b0(s,this.transformPagePoint),u=c.point,p=bv().timestamp;this.history=[nt(nt({},u),{timestamp:p})];var v=n.onSessionStart;v&&v(t,y0(c,this.history)),this.removeListeners=wh(jl(window,"pointermove",this.handlePointerMove),jl(window,"pointerup",this.handlePointerUp),jl(window,"pointercancel",this.handlePointerUp))}}return e.prototype.updateHandlers=function(t){this.handlers=t},e.prototype.end=function(){this.removeListeners&&this.removeListeners(),nc.update(this.updatePoint)},e}();function b0(e,t){return t?{point:t(e.point)}:e}function C$(e,t){return{x:e.x-t.x,y:e.y-t.y}}function y0(e,t){var n=e.point;return{point:n,delta:C$(n,C4(t)),offset:C$(n,Loe(t)),velocity:Boe(t,.1)}}function Loe(e){return e[0]}function C4(e){return e[e.length-1]}function Boe(e,t){if(e.length<2)return{x:0,y:0};for(var n=e.length-1,r=null,o=C4(e);n>=0&&(r=e[n],!(o.timestamp-r.timestamp>wv(t)));)n--;if(!r)return{x:0,y:0};var i=(o.timestamp-r.timestamp)/1e3;if(i===0)return{x:0,y:0};var a={x:(o.x-r.x)/i,y:(o.y-r.y)/i};return a.x===1/0&&(a.x=0),a.y===1/0&&(a.y=0),a}function sa(e){return e.max-e.min}function E$(e,t,n){return t===void 0&&(t=0),n===void 0&&(n=.01),f4(e,t)o&&(e=n?er(o,e,n.max):Math.min(e,o)),e}function I$(e,t,n){return{min:t!==void 0?e.min+t:void 0,max:n!==void 0?e.max+n-(e.max-e.min):void 0}}function Hoe(e,t){var n=t.top,r=t.left,o=t.bottom,i=t.right;return{x:I$(e.x,r,i),y:I$(e.y,n,o)}}function T$(e,t){var 
n,r=t.min-e.min,o=t.max-e.max;return t.max-t.minr?n=pd(t.min,t.max-r,e.min):r>o&&(n=pd(e.min,e.max-o,t.min)),gv(0,1,n)}function Voe(e,t){var n={};return t.min!==void 0&&(n.min=t.min-e.min),t.max!==void 0&&(n.max=t.max-e.min),n}var by=.35;function Woe(e){return e===void 0&&(e=by),e===!1?e=0:e===!0&&(e=by),{x:P$(e,"left","right"),y:P$(e,"top","bottom")}}function P$(e,t,n){return{min:M$(e,t),max:M$(e,n)}}function M$(e,t){var n;return typeof e=="number"?e:(n=e[t])!==null&&n!==void 0?n:0}var N$=function(){return{translate:0,scale:1,origin:0,originPoint:0}},Lu=function(){return{x:N$(),y:N$()}},R$=function(){return{min:0,max:0}},Fr=function(){return{x:R$(),y:R$()}};function Ti(e){return[e("x"),e("y")]}function E4(e){var t=e.top,n=e.left,r=e.right,o=e.bottom;return{x:{min:n,max:r},y:{min:t,max:o}}}function Uoe(e){var t=e.x,n=e.y;return{top:n.min,right:t.max,bottom:n.max,left:t.min}}function Koe(e,t){if(!t)return e;var n=t({x:e.left,y:e.top}),r=t({x:e.right,y:e.bottom});return{top:n.y,left:n.x,bottom:r.y,right:r.x}}function w0(e){return e===void 0||e===1}function k4(e){var t=e.scale,n=e.scaleX,r=e.scaleY;return!w0(t)||!w0(n)||!w0(r)}function Da(e){return k4(e)||D$(e.x)||D$(e.y)||e.z||e.rotate||e.rotateX||e.rotateY}function D$(e){return e&&e!=="0%"}function xv(e,t,n){var r=e-n,o=t*r;return n+o}function j$(e,t,n,r,o){return o!==void 0&&(e=xv(e,o,r)),xv(e,n,r)+t}function yy(e,t,n,r,o){t===void 0&&(t=0),n===void 0&&(n=1),e.min=j$(e.min,t,n,r,o),e.max=j$(e.max,t,n,r,o)}function O4(e,t){var n=t.x,r=t.y;yy(e.x,n.translate,n.scale,n.originPoint),yy(e.y,r.translate,r.scale,r.originPoint)}function qoe(e,t,n,r){var o,i;r===void 0&&(r=!1);var a=n.length;if(a){t.x=t.y=1;for(var s,c,u=0;ut?n="y":Math.abs(e.x)>t&&(n="x"),n}function eie(e){var t=e.dragControls,n=e.visualElement,r=bh(function(){return new Zoe(n)});d.useEffect(function(){return t&&t.subscribe(r)},[r,t]),d.useEffect(function(){return r.addListeners()},[r])}function tie(e){var 
t=e.onPan,n=e.onPanStart,r=e.onPanEnd,o=e.onPanSessionStart,i=e.visualElement,a=t||n||r||o,s=d.useRef(null),c=d.useContext(h1).transformPagePoint,u={onSessionStart:o,onStart:n,onMove:t,onEnd:function(v,h){s.current=null,r&&r(v,h)}};d.useEffect(function(){s.current!==null&&s.current.updateHandlers(u)});function p(v){s.current=new S4(v,u,{transformPagePoint:c})}hv(i,"pointerdown",a&&p),YR(function(){return s.current&&s.current.end()})}var nie={pan:_a(tie),drag:_a(eie)},bp=["LayoutMeasure","BeforeLayoutMeasure","LayoutUpdate","ViewportBoxUpdate","Update","Render","AnimationComplete","LayoutAnimationComplete","AnimationStart","LayoutAnimationStart","SetAxisTarget","Unmount"];function rie(){var e=bp.map(function(){return new Ru}),t={},n={clearAllListeners:function(){return e.forEach(function(r){return r.clear()})},updatePropListeners:function(r){bp.forEach(function(o){var i,a="on"+o,s=r[a];(i=t[o])===null||i===void 0||i.call(t),s&&(t[o]=n[a](s))})}};return e.forEach(function(r,o){n["on"+bp[o]]=function(i){return r.add(i)},n["notify"+bp[o]]=function(){for(var i=[],a=0;a=0?window.pageYOffset:null,u=pie(t,e,s);return i.length&&i.forEach(function(p){var v=wr(p,2),h=v[0],m=v[1];e.getValue(h).set(m)}),e.syncRender(),c!==null&&window.scrollTo({top:c}),{target:u,transitionEnd:r}}else return{target:t,transitionEnd:r}};function hie(e,t,n,r){return cie(t)?vie(e,t,n,r):{target:t,transitionEnd:r}}var gie=function(e,t,n,r){var o=sie(e,t,r);return t=o.target,r=o.transitionEnd,hie(e,t,n,r)};function mie(e){return window.getComputedStyle(e)}var R4={treeType:"dom",readValueFromInstance:function(e,t){if(jd(t)){var n=P1(t);return n&&n.default||0}else{var r=mie(e);return(CR(t)?r.getPropertyValue(t):r[t])||0}},sortNodePosition:function(e,t){return e.compareDocumentPosition(t)&2?1:-1},getBaseTarget:function(e,t){var n;return(n=e.style)===null||n===void 0?void 0:n[t]},measureViewportBox:function(e,t){var n=t.transformPagePoint;return $4(e,n)},resetTransform:function(e,t,n){var 
r=n.transformTemplate;t.style.transform=r?r({},""):"none",e.scheduleRender()},restoreTransform:function(e,t){e.style.transform=t.style.transform},removeValueFromRenderState:function(e,t){var n=t.vars,r=t.style;delete n[e],delete r[e]},makeTargetAnimatable:function(e,t,n,r){var o=n.transformValues;r===void 0&&(r=!0);var i=t.transition,a=t.transitionEnd,s=Yo(t,["transition","transitionEnd"]),c=Eoe(s,i||{},e);if(o&&(a&&(a=o(a)),s&&(s=o(s)),c&&(c=o(c))),r){Soe(e,s,c);var u=gie(e,s,c,a);a=u.transitionEnd,s=u.target}return nt({transition:i,transitionEnd:a},s)},scrapeMotionValuesFromProps:x1,build:function(e,t,n,r,o){e.isVisible!==void 0&&(t.style.visibility=e.isVisible?"visible":"hidden"),b1(t,n,r,o.transformTemplate)},render:LR},bie=I4(R4),yie=I4(nt(nt({},R4),{getBaseTarget:function(e,t){return e[t]},readValueFromInstance:function(e,t){var n;return jd(t)?((n=P1(t))===null||n===void 0?void 0:n.default)||0:(t=BR.has(t)?t:jR(t),e.getAttribute(t))},scrapeMotionValuesFromProps:zR,build:function(e,t,n,r,o){w1(t,n,r,o.transformTemplate)},render:AR})),wie=function(e,t){return g1(e)?yie(t,{enableHardwareAcceleration:!1}):bie(t,{enableHardwareAcceleration:!0})};function _$(e,t){return t.max===t.min?0:e/(t.max-t.min)*100}var su={correct:function(e,t){if(!t.target)return e;if(typeof e=="string")if(Ut.test(e))e=parseFloat(e);else return e;var n=_$(e,t.target.x),r=_$(e,t.target.y);return"".concat(n,"% ").concat(r,"%")}},V$="_$css",xie={correct:function(e,t){var n=t.treeScale,r=t.projectionDelta,o=e,i=e.includes("var("),a=[];i&&(e=e.replace(P4,function(y){return a.push(y),V$}));var s=aa.parse(e);if(s.length>5)return o;var c=aa.createTransformer(e),u=typeof s[0]!="number"?1:0,p=r.x.scale*n.x,v=r.y.scale*n.y;s[0+u]/=p,s[1+u]/=v;var h=er(p,v,.5);typeof s[2+u]=="number"&&(s[2+u]/=h),typeof s[3+u]=="number"&&(s[3+u]/=h);var m=c(s);if(i){var b=0;m=m.replace(V$,function(){var y=a[b];return b++,y})}return m}},Sie=function(e){dR(t,e);function t(){return 
e!==null&&e.apply(this,arguments)||this}return t.prototype.componentDidMount=function(){var n=this,r=this.props,o=r.visualElement,i=r.layoutGroup,a=r.switchLayoutGroup,s=r.layoutId,c=o.projection;qte(Eie),c&&(i!=null&&i.group&&i.group.add(c),a!=null&&a.register&&s&&a.register(c),c.root.didUpdate(),c.addEventListener("animationComplete",function(){n.safeToRemove()}),c.setOptions(nt(nt({},c.options),{onExitComplete:function(){return n.safeToRemove()}}))),Pu.hasEverUpdated=!0},t.prototype.getSnapshotBeforeUpdate=function(n){var r=this,o=this.props,i=o.layoutDependency,a=o.visualElement,s=o.drag,c=o.isPresent,u=a.projection;return u&&(u.isPresent=c,s||n.layoutDependency!==i||i===void 0?u.willUpdate():this.safeToRemove(),n.isPresent!==c&&(c?u.promote():u.relegate()||Bi.postRender(function(){var p;!((p=u.getStack())===null||p===void 0)&&p.members.length||r.safeToRemove()}))),null},t.prototype.componentDidUpdate=function(){var n=this.props.visualElement.projection;n&&(n.root.didUpdate(),!n.currentAnimation&&n.isLead()&&this.safeToRemove())},t.prototype.componentWillUnmount=function(){var n=this.props,r=n.visualElement,o=n.layoutGroup,i=n.switchLayoutGroup,a=r.projection;a&&(a.scheduleCheckAfterUnmount(),o!=null&&o.group&&o.group.remove(a),i!=null&&i.deregister&&i.deregister(a))},t.prototype.safeToRemove=function(){var n=this.props.safeToRemove;n==null||n()},t.prototype.render=function(){return null},t}(ue.Component);function Cie(e){var t=wr(m4(),2),n=t[0],r=t[1],o=d.useContext(yR);return ue.createElement(Sie,nt({},e,{layoutGroup:o,switchLayoutGroup:d.useContext(wR),isPresent:n,safeToRemove:r}))}var Eie={borderRadius:nt(nt({},su),{applyTo:["borderTopLeftRadius","borderTopRightRadius","borderBottomLeftRadius","borderBottomRightRadius"]}),borderTopLeftRadius:su,borderTopRightRadius:su,borderBottomLeftRadius:su,borderBottomRightRadius:su,boxShadow:xie},kie={measureLayout:Cie};function Oie(e,t,n){n===void 0&&(n={});var r=ia(e)?e:rc(e);return 
R1("",r,t,n),{stop:function(){return r.stop()},isAnimating:function(){return r.isAnimating()}}}var D4=["TopLeft","TopRight","BottomLeft","BottomRight"],$ie=D4.length,W$=function(e){return typeof e=="string"?parseFloat(e):e},U$=function(e){return typeof e=="number"||Ut.test(e)};function Iie(e,t,n,r,o,i){var a,s,c,u;o?(e.opacity=er(0,(a=n.opacity)!==null&&a!==void 0?a:1,Tie(r)),e.opacityExit=er((s=t.opacity)!==null&&s!==void 0?s:1,0,Pie(r))):i&&(e.opacity=er((c=t.opacity)!==null&&c!==void 0?c:1,(u=n.opacity)!==null&&u!==void 0?u:1,r));for(var p=0;p<$ie;p++){var v="border".concat(D4[p],"Radius"),h=K$(t,v),m=K$(n,v);if(!(h===void 0&&m===void 0)){h||(h=0),m||(m=0);var b=h===0||m===0||U$(h)===U$(m);b?(e[v]=Math.max(er(W$(h),W$(m),r),0),(Li.test(m)||Li.test(h))&&(e[v]+="%")):e[v]=m}}(t.rotate||n.rotate)&&(e.rotate=er(t.rotate||0,n.rotate||0,r))}function K$(e,t){var n;return(n=e[t])!==null&&n!==void 0?n:e.borderRadius}var Tie=j4(0,.5,I1),Pie=j4(.5,.95,O1);function j4(e,t,n){return function(r){return rt?1:n(pd(e,t,r))}}function q$(e,t){e.min=t.min,e.max=t.max}function ri(e,t){q$(e.x,t.x),q$(e.y,t.y)}function X$(e,t,n,r,o){return e-=t,e=xv(e,1/n,r),o!==void 0&&(e=xv(e,1/o,r)),e}function Mie(e,t,n,r,o,i,a){if(t===void 0&&(t=0),n===void 0&&(n=1),r===void 0&&(r=.5),i===void 0&&(i=e),a===void 0&&(a=e),Li.test(t)){t=parseFloat(t);var s=er(a.min,a.max,t/100);t=s-a.min}if(typeof t=="number"){var c=er(i.min,i.max,r);e===i&&(c-=t),e.min=X$(e.min,t,n,c,o),e.max=X$(e.max,t,n,c,o)}}function G$(e,t,n,r,o){var i=wr(n,3),a=i[0],s=i[1],c=i[2];Mie(e,t[a],t[s],t[c],t.scale,r,o)}var Nie=["x","scaleX","originX"],Rie=["y","scaleY","originY"];function Y$(e,t,n,r){G$(e.x,t,Nie,n==null?void 0:n.x,r==null?void 0:r.x),G$(e.y,t,Rie,n==null?void 0:n.y,r==null?void 0:r.y)}function Q$(e){return e.translate===0&&e.scale===1}function L4(e){return Q$(e.x)&&Q$(e.y)}function B4(e,t){return e.x.min===t.x.min&&e.x.max===t.x.max&&e.y.min===t.y.min&&e.y.max===t.y.max}var Die=function(){function 
e(){this.members=[]}return e.prototype.add=function(t){D1(this.members,t),t.scheduleRender()},e.prototype.remove=function(t){if(j1(this.members,t),t===this.prevLead&&(this.prevLead=void 0),t===this.lead){var n=this.members[this.members.length-1];n&&this.promote(n)}},e.prototype.relegate=function(t){var n=this.members.findIndex(function(a){return t===a});if(n===0)return!1;for(var r,o=n;o>=0;o--){var i=this.members[o];if(i.isPresent!==!1){r=i;break}}return r?(this.promote(r),!0):!1},e.prototype.promote=function(t,n){var r,o=this.lead;if(t!==o&&(this.prevLead=o,this.lead=t,t.show(),o)){o.instance&&o.scheduleRender(),t.scheduleRender(),t.resumeFrom=o,n&&(t.resumeFrom.preserveOpacity=!0),o.snapshot&&(t.snapshot=o.snapshot,t.snapshot.latestValues=o.animationValues||o.latestValues,t.snapshot.isShared=!0),!((r=t.root)===null||r===void 0)&&r.isUpdating&&(t.isLayoutDirty=!0);var i=t.options.crossfade;i===!1&&o.hide()}},e.prototype.exitAnimationComplete=function(){this.members.forEach(function(t){var n,r,o,i,a;(r=(n=t.options).onExitComplete)===null||r===void 0||r.call(n),(a=(o=t.resumingFrom)===null||o===void 0?void 0:(i=o.options).onExitComplete)===null||a===void 0||a.call(i)})},e.prototype.scheduleRender=function(){this.members.forEach(function(t){t.instance&&t.scheduleRender(!1)})},e.prototype.removeLeadSnapshot=function(){this.lead&&this.lead.snapshot&&(this.lead.snapshot=void 0)},e}(),jie="translate3d(0px, 0px, 0) scale(1, 1) scale(1, 1)";function Z$(e,t,n){var r=e.x.translate/t.x,o=e.y.translate/t.y,i="translate3d(".concat(r,"px, ").concat(o,"px, 0) ");if(i+="scale(".concat(1/t.x,", ").concat(1/t.y,") "),n){var a=n.rotate,s=n.rotateX,c=n.rotateY;a&&(i+="rotate(".concat(a,"deg) ")),s&&(i+="rotateX(".concat(s,"deg) ")),c&&(i+="rotateY(".concat(c,"deg) "))}var u=e.x.scale*t.x,p=e.y.scale*t.y;return i+="scale(".concat(u,", ").concat(p,")"),i===jie?"none":i}var Lie=function(e,t){return e.depth-t.depth},Bie=function(){function e(){this.children=[],this.isDirty=!1}return 
e.prototype.add=function(t){D1(this.children,t),this.isDirty=!0},e.prototype.remove=function(t){j1(this.children,t),this.isDirty=!0},e.prototype.forEach=function(t){this.isDirty&&this.children.sort(Lie),this.isDirty=!1,this.children.forEach(t)},e}(),J$=1e3;function A4(e){var t=e.attachResizeListener,n=e.defaultParent,r=e.measureScroll,o=e.checkIsScrollRoot,i=e.resetTransform;return function(){function a(s,c,u){var p=this;c===void 0&&(c={}),u===void 0&&(u=n==null?void 0:n()),this.children=new Set,this.options={},this.isTreeAnimating=!1,this.isAnimationBlocked=!1,this.isLayoutDirty=!1,this.updateManuallyBlocked=!1,this.updateBlockedByResize=!1,this.isUpdating=!1,this.isSVG=!1,this.needsReset=!1,this.shouldResetTransform=!1,this.treeScale={x:1,y:1},this.eventHandlers=new Map,this.potentialNodes=new Map,this.checkUpdateFailed=function(){p.isUpdating&&(p.isUpdating=!1,p.clearAllSnapshots())},this.updateProjection=function(){p.nodes.forEach(Vie),p.nodes.forEach(Wie)},this.hasProjected=!1,this.isVisible=!0,this.animationProgress=0,this.sharedNodes=new Map,this.id=s,this.latestValues=c,this.root=u?u.root||u:this,this.path=u?ji(ji([],wr(u.path),!1),[u],!1):[],this.parent=u,this.depth=u?u.depth+1:0,s&&this.root.registerPotentialNode(s,this);for(var v=0;v=0;r--)if(e.path[r].instance){n=e.path[r];break}var o=n&&n!==e.root?n.instance:document,i=o.querySelector('[data-projection-id="'.concat(t,'"]'));i&&e.mount(i,!0)}function r2(e){e.min=Math.round(e.min),e.max=Math.round(e.max)}function o2(e){r2(e.x),r2(e.y)}var Qie=A4({attachResizeListener:function(e,t){return yh(e,"resize",t)},measureScroll:function(){return{x:document.documentElement.scrollLeft||document.body.scrollLeft,y:document.documentElement.scrollTop||document.body.scrollTop}},checkIsScrollRoot:function(){return!0}}),x0={current:void 0},Zie=A4({measureScroll:function(e){return{x:e.scrollLeft,y:e.scrollTop}},defaultParent:function(){if(!x0.current){var e=new 
Qie(0,{});e.mount(window),e.setOptions({layoutScroll:!0}),x0.current=e}return x0.current},resetTransform:function(e,t){e.style.transform=t??"none"},checkIsScrollRoot:function(e){return window.getComputedStyle(e).position==="fixed"}}),Jie=nt(nt(nt(nt({},joe),Jre),nie),kie),eae=Ute(function(e,t){return Pne(e,t,Jie,wie,Zie)});function Sy(e,t,n){var r,o,i,a,s;t==null&&(t=100);function c(){var p=Date.now()-a;p=0?r=setTimeout(c,t-p):(r=null,n||(s=e.apply(i,o),i=o=null))}var u=function(){i=this,o=arguments,a=Date.now();var p=n&&!r;return r||(r=setTimeout(c,t)),p&&(s=e.apply(i,o),i=o=null),s};return u.clear=function(){r&&(clearTimeout(r),r=null)},u.flush=function(){r&&(s=e.apply(i,o),i=o=null,clearTimeout(r),r=null)},u}Sy.debounce=Sy;var tae=Sy;const i2=js(tae);function nae(e){let{debounce:t,scroll:n,polyfill:r,offsetSize:o}=e===void 0?{debounce:0,scroll:!1,offsetSize:!1}:e;const i=r||(typeof window>"u"?class{}:window.ResizeObserver);if(!i)throw new Error("This browser does not support ResizeObserver out of the box. 
See: https://github.com/react-spring/react-use-measure/#resize-observer-polyfills");const[a,s]=d.useState({left:0,top:0,width:0,height:0,bottom:0,right:0,x:0,y:0}),c=d.useRef({element:null,scrollContainers:null,resizeObserver:null,lastBounds:a}),u=t?typeof t=="number"?t:t.scroll:null,p=t?typeof t=="number"?t:t.resize:null,v=d.useRef(!1);d.useEffect(()=>(v.current=!0,()=>void(v.current=!1)));const[h,m,b]=d.useMemo(()=>{const S=()=>{if(!c.current.element)return;const{left:E,top:k,width:O,height:$,bottom:T,right:M,x:P,y:R}=c.current.element.getBoundingClientRect(),A={left:E,top:k,width:O,height:$,bottom:T,right:M,x:P,y:R};c.current.element instanceof HTMLElement&&o&&(A.height=c.current.element.offsetHeight,A.width=c.current.element.offsetWidth),Object.freeze(A),v.current&&!aae(c.current.lastBounds,A)&&s(c.current.lastBounds=A)};return[S,p?i2(S,p):S,u?i2(S,u):S]},[s,o,u,p]);function y(){c.current.scrollContainers&&(c.current.scrollContainers.forEach(S=>S.removeEventListener("scroll",b,!0)),c.current.scrollContainers=null),c.current.resizeObserver&&(c.current.resizeObserver.disconnect(),c.current.resizeObserver=null)}function w(){c.current.element&&(c.current.resizeObserver=new i(b),c.current.resizeObserver.observe(c.current.element),n&&c.current.scrollContainers&&c.current.scrollContainers.forEach(S=>S.addEventListener("scroll",b,{capture:!0,passive:!0})))}const C=S=>{!S||S===c.current.element||(y(),c.current.element=S,c.current.scrollContainers=z4(S),w())};return oae(b,!!n),rae(m),d.useEffect(()=>{y(),w()},[n,b,m]),d.useEffect(()=>y,[]),[C,a,h]}function rae(e){d.useEffect(()=>{const t=e;return window.addEventListener("resize",t),()=>void window.removeEventListener("resize",t)},[e])}function oae(e,t){d.useEffect(()=>{if(t){const n=e;return window.addEventListener("scroll",n,{capture:!0,passive:!0}),()=>void window.removeEventListener("scroll",n,!0)}},[e,t])}function z4(e){const t=[];if(!e||e===document.body)return 
t;const{overflow:n,overflowX:r,overflowY:o}=window.getComputedStyle(e);return[n,r,o].some(i=>i==="auto"||i==="scroll")&&t.push(e),[...t,...z4(e.parentElement)]}const iae=["x","y","top","bottom","left","right","width","height"],aae=(e,t)=>iae.every(n=>e[n]===t[n]);function S0(e,t,n){return t in e?Object.defineProperty(e,t,{value:n,enumerable:!0,configurable:!0,writable:!0}):e[t]=n,e}function sae(e,t,n){lae(e,t),t.set(e,n)}function lae(e,t){if(t.has(e))throw new TypeError("Cannot initialize the same private elements twice on an object")}function yp(e,t){var n=H4(e,t,"get");return cae(e,n)}function cae(e,t){return t.get?t.get.call(e):t.value}function uae(e,t,n){var r=H4(e,t,"set");return dae(e,r,n),n}function H4(e,t,n){if(!t.has(e))throw new TypeError("attempted to "+n+" private field on non-instance");return t.get(e)}function dae(e,t,n){if(t.set)t.set.call(e,n);else{if(!t.writable)throw new TypeError("attempted to set read only private field");t.value=n}}var yl=new WeakMap;class fae{constructor(){sae(this,yl,{writable:!0,value:void 0}),S0(this,"register",t=>{yp(this,yl).push(t)}),S0(this,"unregister",t=>{let n;for(;(n=yp(this,yl).indexOf(t))!==-1;)yp(this,yl).splice(n,1)}),S0(this,"backendChanged",t=>{for(const n of yp(this,yl))n.backendChanged(t)}),uae(this,yl,[])}}function oi(e,t,n){pae(e,t),t.set(e,n)}function pae(e,t){if(t.has(e))throw new TypeError("Cannot initialize the same private elements twice on an object")}function Ii(e,t,n){return t in e?Object.defineProperty(e,t,{value:n,enumerable:!0,configurable:!0,writable:!0}):e[t]=n,e}function rn(e,t){var n=F4(e,t,"get");return vae(e,n)}function vae(e,t){return t.get?t.get.call(e):t.value}function wl(e,t,n){var r=F4(e,t,"set");return hae(e,r,n),n}function F4(e,t,n){if(!t.has(e))throw new TypeError("attempted to "+n+" private field on non-instance");return t.get(e)}function hae(e,t,n){if(t.set)t.set.call(e,n);else{if(!t.writable)throw new TypeError("attempted to set read only private field");t.value=n}}var Bo=new 
WeakMap,wp=new WeakMap,Ao=new WeakMap,Ta=new WeakMap,gs=new WeakMap,a2=new WeakMap,s2=new WeakMap,l2=new WeakMap,C0=new WeakMap,E0=new WeakMap,xp=new WeakMap;class Bl{constructor(t,n,r){if(oi(this,Bo,{writable:!0,value:void 0}),oi(this,wp,{writable:!0,value:void 0}),oi(this,Ao,{writable:!0,value:void 0}),oi(this,Ta,{writable:!0,value:void 0}),oi(this,gs,{writable:!0,value:void 0}),oi(this,a2,{writable:!0,value:(o,i,a)=>{var s,c;if(!a.backend)throw new Error("You must specify a 'backend' property in your Backend entry: ".concat(JSON.stringify(a)));const u=a.backend(o,i,a.options);let p=a.id;const v=!a.id&&u&&u.constructor;if(v&&(p=u.constructor.name),!p)throw new Error("You must specify an 'id' property in your Backend entry: ".concat(JSON.stringify(a),` - see this guide: https://github.com/louisbrunner/dnd-multi-backend/tree/master/packages/react-dnd-multi-backend#migrating-from-5xx`));if(rn(this,Ao)[p])throw new Error(`You must specify a unique 'id' property in your Backend entry: - `.concat(JSON.stringify(a)," (conflicts with: ").concat(JSON.stringify(rn(this,Ao)[p]),")"));return{id:p,instance:u,preview:(s=a.preview)!==null&&s!==void 0?s:!1,transition:a.transition,skipDispatchOnTransition:(c=a.skipDispatchOnTransition)!==null&&c!==void 0?c:!1}}}),Ii(this,"setup",()=>{if(!(typeof window>"u")){if(Bl.isSetUp)throw new Error("Cannot have two MultiBackends at the same time.");Bl.isSetUp=!0,rn(this,s2).call(this,window),rn(this,Ao)[rn(this,Bo)].instance.setup()}}),Ii(this,"teardown",()=>{typeof 
window>"u"||(Bl.isSetUp=!1,rn(this,l2).call(this,window),rn(this,Ao)[rn(this,Bo)].instance.teardown())}),Ii(this,"connectDragSource",(o,i,a)=>rn(this,xp).call(this,"connectDragSource",o,i,a)),Ii(this,"connectDragPreview",(o,i,a)=>rn(this,xp).call(this,"connectDragPreview",o,i,a)),Ii(this,"connectDropTarget",(o,i,a)=>rn(this,xp).call(this,"connectDropTarget",o,i,a)),Ii(this,"profile",()=>rn(this,Ao)[rn(this,Bo)].instance.profile()),Ii(this,"previewEnabled",()=>rn(this,Ao)[rn(this,Bo)].preview),Ii(this,"previewsList",()=>rn(this,wp)),Ii(this,"backendsList",()=>rn(this,Ta)),oi(this,s2,{writable:!0,value:o=>{rn(this,Ta).forEach(i=>{i.transition&&o.addEventListener(i.transition.event,rn(this,C0))})}}),oi(this,l2,{writable:!0,value:o=>{rn(this,Ta).forEach(i=>{i.transition&&o.removeEventListener(i.transition.event,rn(this,C0))})}}),oi(this,C0,{writable:!0,value:o=>{const i=rn(this,Bo);if(rn(this,Ta).some(s=>s.id!==rn(this,Bo)&&s.transition&&s.transition.check(o)?(wl(this,Bo,s.id),!0):!1),rn(this,Bo)!==i){var a;rn(this,Ao)[i].instance.teardown(),Object.keys(rn(this,gs)).forEach(p=>{const v=rn(this,gs)[p];v.unsubscribe(),v.unsubscribe=rn(this,E0).call(this,v.func,...v.args)}),rn(this,wp).backendChanged(this);const s=rn(this,Ao)[rn(this,Bo)];if(s.instance.setup(),s.skipDispatchOnTransition)return;const c=o.constructor,u=new c(o.type,o);(a=o.target)===null||a===void 0||a.dispatchEvent(u)}}}),oi(this,E0,{writable:!0,value:(o,i,a,s)=>rn(this,Ao)[rn(this,Bo)].instance[o](i,a,s)}),oi(this,xp,{writable:!0,value:(o,i,a,s)=>{const c="".concat(o,"_").concat(i),u=rn(this,E0).call(this,o,i,a,s);return rn(this,gs)[c]={func:o,args:[i,a,s],unsubscribe:u},()=>{rn(this,gs)[c].unsubscribe(),delete rn(this,gs)[c]}}}),!r||!r.backends||r.backends.length<1)throw new Error(`You must specify at least one Backend, if you are coming from 2.x.x (or don't understand this error) - see this guide: 
https://github.com/louisbrunner/dnd-multi-backend/tree/master/packages/react-dnd-multi-backend#migrating-from-2xx`);wl(this,wp,new fae),wl(this,Ao,{}),wl(this,Ta,[]),r.backends.forEach(o=>{const i=rn(this,a2).call(this,t,n,o);rn(this,Ao)[i.id]=i,rn(this,Ta).push(i)}),wl(this,Bo,rn(this,Ta)[0].id),wl(this,gs,{})}}Ii(Bl,"isSetUp",!1);const gae=(e,t,n)=>new Bl(e,t,n),_4=(e,t)=>({event:e,check:t}),mae=_4("touchstart",e=>{const t=e;return t.touches!==null&&t.touches!==void 0}),bae=_4("pointerdown",e=>e.pointerType=="mouse");var Wa;(function(e){e.mouse="mouse",e.touch="touch",e.keyboard="keyboard"})(Wa||(Wa={}));class yae{get delay(){var t;return(t=this.args.delay)!==null&&t!==void 0?t:0}get scrollAngleRanges(){return this.args.scrollAngleRanges}get getDropTargetElementsAtPoint(){return this.args.getDropTargetElementsAtPoint}get ignoreContextMenu(){var t;return(t=this.args.ignoreContextMenu)!==null&&t!==void 0?t:!1}get enableHoverOutsideTarget(){var t;return(t=this.args.enableHoverOutsideTarget)!==null&&t!==void 0?t:!1}get enableKeyboardEvents(){var t;return(t=this.args.enableKeyboardEvents)!==null&&t!==void 0?t:!1}get enableMouseEvents(){var t;return(t=this.args.enableMouseEvents)!==null&&t!==void 0?t:!1}get enableTouchEvents(){var t;return(t=this.args.enableTouchEvents)!==null&&t!==void 0?t:!0}get touchSlop(){return this.args.touchSlop||0}get delayTouchStart(){var t,n,r,o;return(o=(r=(t=this.args)===null||t===void 0?void 0:t.delayTouchStart)!==null&&r!==void 0?r:(n=this.args)===null||n===void 0?void 0:n.delay)!==null&&o!==void 0?o:0}get delayMouseStart(){var t,n,r,o;return(o=(r=(t=this.args)===null||t===void 0?void 0:t.delayMouseStart)!==null&&r!==void 0?r:(n=this.args)===null||n===void 0?void 0:n.delay)!==null&&o!==void 0?o:0}get window(){if(this.context&&this.context.window)return this.context.window;if(typeof window<"u")return window}get document(){var t;if(!((t=this.context)===null||t===void 0)&&t.document)return this.context.document;if(this.window)return 
this.window.document}get rootElement(){var t;return((t=this.args)===null||t===void 0?void 0:t.rootElement)||this.document}constructor(t,n){this.args=t,this.context=n}}function wae(e,t,n,r){return Math.sqrt(Math.pow(Math.abs(n-e),2)+Math.pow(Math.abs(r-t),2))}function xae(e,t,n,r,o){if(!o)return!1;const i=Math.atan2(r-t,n-e)*180/Math.PI+180;for(let a=0;a=s.start)&&(s.end==null||i<=s.end))return!0}return!1}const Sae={Left:1,Right:2,Center:4},Cae={Left:0,Center:1,Right:2};function k0(e){return e.button===void 0||e.button===Cae.Left}function Eae(e){return e.buttons===void 0||(e.buttons&Sae.Left)===0}function V4(e){return!!e.targetTouches}const kae=1;function Oae(e){const t=e.nodeType===kae?e:e.parentElement;if(!t)return;const{top:n,left:r}=t.getBoundingClientRect();return{x:r,y:n}}function $ae(e,t){if(e.targetTouches.length===1)return Sv(e.targetTouches[0]);if(t&&e.touches.length===1&&e.touches[0].target===t.target)return Sv(e.touches[0])}function Sv(e,t){return V4(e)?$ae(e,t):{x:e.clientX,y:e.clientY}}const c2=(()=>{let e=!1;try{addEventListener("test",()=>{},Object.defineProperty({},"passive",{get(){return e=!0,!0}}))}catch{}return e})(),lu={[Wa.mouse]:{start:"mousedown",move:"mousemove",end:"mouseup",contextmenu:"contextmenu"},[Wa.touch]:{start:"touchstart",move:"touchmove",end:"touchend"},[Wa.keyboard]:{keydown:"keydown"}};class Bu{profile(){var t;return{sourceNodes:this.sourceNodes.size,sourcePreviewNodes:this.sourcePreviewNodes.size,sourcePreviewNodeOptions:this.sourcePreviewNodeOptions.size,targetNodes:this.targetNodes.size,dragOverTargetIds:((t=this.dragOverTargetIds)===null||t===void 0?void 0:t.length)||0}}get document(){return this.options.document}setup(){const t=this.options.rootElement;t&&(nn(!Bu.isSetUp,"Cannot have two Touch backends at the same 
time."),Bu.isSetUp=!0,this.addEventListener(t,"start",this.getTopMoveStartHandler()),this.addEventListener(t,"start",this.handleTopMoveStartCapture,!0),this.addEventListener(t,"move",this.handleTopMove),this.addEventListener(t,"move",this.handleTopMoveCapture,!0),this.addEventListener(t,"end",this.handleTopMoveEndCapture,!0),this.options.enableMouseEvents&&!this.options.ignoreContextMenu&&this.addEventListener(t,"contextmenu",this.handleTopMoveEndCapture),this.options.enableKeyboardEvents&&this.addEventListener(t,"keydown",this.handleCancelOnEscape,!0))}teardown(){const t=this.options.rootElement;t&&(Bu.isSetUp=!1,this._mouseClientOffset={},this.removeEventListener(t,"start",this.handleTopMoveStartCapture,!0),this.removeEventListener(t,"start",this.handleTopMoveStart),this.removeEventListener(t,"move",this.handleTopMoveCapture,!0),this.removeEventListener(t,"move",this.handleTopMove),this.removeEventListener(t,"end",this.handleTopMoveEndCapture,!0),this.options.enableMouseEvents&&!this.options.ignoreContextMenu&&this.removeEventListener(t,"contextmenu",this.handleTopMoveEndCapture),this.options.enableKeyboardEvents&&this.removeEventListener(t,"keydown",this.handleCancelOnEscape,!0),this.uninstallSourceNodeRemovalObserver())}addEventListener(t,n,r,o=!1){const i=c2?{capture:o,passive:!1}:o;this.listenerTypes.forEach(function(a){const s=lu[a][n];s&&t.addEventListener(s,r,i)})}removeEventListener(t,n,r,o=!1){const i=c2?{capture:o,passive:!1}:o;this.listenerTypes.forEach(function(a){const s=lu[a][n];s&&t.removeEventListener(s,r,i)})}connectDragSource(t,n){const r=this.handleMoveStart.bind(this,t);return this.sourceNodes.set(t,n),this.addEventListener(n,"start",r),()=>{this.sourceNodes.delete(t),this.removeEventListener(n,"start",r)}}connectDragPreview(t,n,r){return this.sourcePreviewNodeOptions.set(t,r),this.sourcePreviewNodes.set(t,n),()=>{this.sourcePreviewNodes.delete(t),this.sourcePreviewNodeOptions.delete(t)}}connectDropTarget(t,n){const 
r=this.options.rootElement;if(!this.document||!r)return()=>{};const o=i=>{if(!this.document||!r||!this.monitor.isDragging())return;let a;switch(i.type){case lu.mouse.move:a={x:i.clientX,y:i.clientY};break;case lu.touch.move:var s,c;a={x:((s=i.touches[0])===null||s===void 0?void 0:s.clientX)||0,y:((c=i.touches[0])===null||c===void 0?void 0:c.clientY)||0};break}const u=a!=null?this.document.elementFromPoint(a.x,a.y):void 0,p=u&&n.contains(u);if(u===n||p)return this.handleMove(i,t)};return this.addEventListener(this.document.body,"move",o),this.targetNodes.set(t,n),()=>{this.document&&(this.targetNodes.delete(t),this.removeEventListener(this.document.body,"move",o))}}getTopMoveStartHandler(){return!this.options.delayTouchStart&&!this.options.delayMouseStart?this.handleTopMoveStart:this.handleTopMoveStartDelay}installSourceNodeRemovalObserver(t){this.uninstallSourceNodeRemovalObserver(),this.draggedSourceNode=t,this.draggedSourceNodeRemovalObserver=new MutationObserver(()=>{t&&!t.parentElement&&(this.resurrectSourceNode(),this.uninstallSourceNodeRemovalObserver())}),!(!t||!t.parentElement)&&this.draggedSourceNodeRemovalObserver.observe(t.parentElement,{childList:!0})}resurrectSourceNode(){this.document&&this.draggedSourceNode&&(this.draggedSourceNode.style.display="none",this.draggedSourceNode.removeAttribute("data-reactid"),this.document.body.appendChild(this.draggedSourceNode))}uninstallSourceNodeRemovalObserver(){this.draggedSourceNodeRemovalObserver&&this.draggedSourceNodeRemovalObserver.disconnect(),this.draggedSourceNodeRemovalObserver=void 0,this.draggedSourceNode=void 0}constructor(t,n,r){this.getSourceClientOffset=o=>{const i=this.sourceNodes.get(o);return i&&Oae(i)},this.handleTopMoveStartCapture=o=>{k0(o)&&(this.moveStartSourceIds=[])},this.handleMoveStart=o=>{Array.isArray(this.moveStartSourceIds)&&this.moveStartSourceIds.unshift(o)},this.handleTopMoveStart=o=>{if(!k0(o))return;const 
i=Sv(o);i&&(V4(o)&&(this.lastTargetTouchFallback=o.targetTouches[0]),this._mouseClientOffset=i),this.waitingForDelay=!1},this.handleTopMoveStartDelay=o=>{if(!k0(o))return;const i=o.type===lu.touch.start?this.options.delayTouchStart:this.options.delayMouseStart;this.timeout=setTimeout(this.handleTopMoveStart.bind(this,o),i),this.waitingForDelay=!0},this.handleTopMoveCapture=()=>{this.dragOverTargetIds=[]},this.handleMove=(o,i)=>{this.dragOverTargetIds&&this.dragOverTargetIds.unshift(i)},this.handleTopMove=o=>{if(this.timeout&&clearTimeout(this.timeout),!this.document||this.waitingForDelay)return;const{moveStartSourceIds:i,dragOverTargetIds:a}=this,s=this.options.enableHoverOutsideTarget,c=Sv(o,this.lastTargetTouchFallback);if(!c)return;if(this._isScrolling||!this.monitor.isDragging()&&xae(this._mouseClientOffset.x||0,this._mouseClientOffset.y||0,c.x,c.y,this.options.scrollAngleRanges)){this._isScrolling=!0;return}if(!this.monitor.isDragging()&&this._mouseClientOffset.hasOwnProperty("x")&&i&&wae(this._mouseClientOffset.x||0,this._mouseClientOffset.y||0,c.x,c.y)>(this.options.touchSlop?this.options.touchSlop:0)&&(this.moveStartSourceIds=void 0,this.actions.beginDrag(i,{clientOffset:this._mouseClientOffset,getSourceClientOffset:this.getSourceClientOffset,publishSource:!1})),!this.monitor.isDragging())return;const u=this.sourceNodes.get(this.monitor.getSourceId());this.installSourceNodeRemovalObserver(u),this.actions.publishDragSource(),o.cancelable&&o.preventDefault();const p=(a||[]).map(b=>this.targetNodes.get(b)).filter(b=>!!b),v=this.options.getDropTargetElementsAtPoint?this.options.getDropTargetElementsAtPoint(c.x,c.y,p):this.document.elementsFromPoint(c.x,c.y),h=[];for(const b in v){if(!v.hasOwnProperty(b))continue;let y=v[b];for(y!=null&&h.push(y);y;)y=y.parentElement,y&&h.indexOf(y)===-1&&h.push(y)}const m=h.filter(b=>p.indexOf(b)>-1).map(b=>this._getDropTargetId(b)).filter(b=>!!b).filter((b,y,w)=>w.indexOf(b)===y);if(s)for(const b in this.targetNodes){const 
y=this.targetNodes.get(b);if(u&&y&&y.contains(u)&&m.indexOf(b)===-1){m.unshift(b);break}}m.reverse(),this.actions.hover(m,{clientOffset:c})},this._getDropTargetId=o=>{const i=this.targetNodes.keys();let a=i.next();for(;a.done===!1;){const s=a.value;if(o===this.targetNodes.get(s))return s;a=i.next()}},this.handleTopMoveEndCapture=o=>{if(this._isScrolling=!1,this.lastTargetTouchFallback=void 0,!!Eae(o)){if(!this.monitor.isDragging()||this.monitor.didDrop()){this.moveStartSourceIds=void 0;return}o.cancelable&&o.preventDefault(),this._mouseClientOffset={},this.uninstallSourceNodeRemovalObserver(),this.actions.drop(),this.actions.endDrag()}},this.handleCancelOnEscape=o=>{o.key==="Escape"&&this.monitor.isDragging()&&(this._mouseClientOffset={},this.uninstallSourceNodeRemovalObserver(),this.actions.endDrag())},this.options=new yae(r,n),this.actions=t.getActions(),this.monitor=t.getMonitor(),this.sourceNodes=new Map,this.sourcePreviewNodes=new Map,this.sourcePreviewNodeOptions=new Map,this.targetNodes=new Map,this.listenerTypes=[],this._mouseClientOffset={},this._isScrolling=!1,this.options.enableMouseEvents&&this.listenerTypes.push(Wa.mouse),this.options.enableTouchEvents&&this.listenerTypes.push(Wa.touch),this.options.enableKeyboardEvents&&this.listenerTypes.push(Wa.keyboard)}}const Iae=function(t,n={},r={}){return new Bu(t,n,r)};var Ko=function(){return Ko=Object.assign||function(t){for(var n,r=1,o=arguments.length;rt.text?1:e.texto?i-1:i,[o,i]},Dae=function(e,t,n){var r=t<0?e.length+t:t;if(r>=0&&rr?"down":"up"},Cy=function(e,t){var n="",r=0;return e.forEach(function(o,i){var a,s=Lae(o,((a=t.getClientOffset())===null||a===void 0?void 0:a.y)||0);n===""?n=s:n!==s&&(n=s,r=i),i===e.length-1&&s==="down"&&(r=i+1)}),r},u2=function(e,t,n){var r=t.closest('[role="list"]'),o=r==null?void 0:r.querySelectorAll(':scope > [role="listitem"]');return o?Cy(o,n):null},Bae=function(e,t,n){var r=e.getBoundingClientRect(),o=n.dropTargetOffset,i=r.top+o,a=r.bottom-o;return t>a?"lower":t 
[role="listitem"]');return{id:r.rootId,index:Cy(i,n)}}var a=n.getItem(),s=t.querySelector('[role="list"]'),c=Bae(t,((o=n.getClientOffset())===null||o===void 0?void 0:o.y)||0,r);if(s){if(c==="upper")if(Ya(a,e.parent,r)){var u=u2(e,t,n);return u===null?null:{id:e.parent,index:u}}else return{id:e.id,index:0};var i=s.querySelectorAll(':scope > [role="listitem"]');return{id:e.id,index:Cy(i,n)}}else{if(c==="middle")return{id:e.id,index:0};if(Ya(a,e.parent,r)){var u=u2(e,t,n);return u===null?null:{id:e.parent,index:u}}return null}},Aae=function(e){return e===void 0&&(e={}),{backends:[{id:"html5",backend:rte,options:e.html5,transition:bae},{id:"touch",backend:Iae,options:e.touch||{enableMouseEvents:!0},preview:!0,transition:mae}]}},Fp=function(e,t){return e.some(function(n){return n.parent===t})},q4=d.createContext({}),zae=function(e){var t=Xae(e.tree,e.initialOpen),n=t[0],r=t[1],o=r.handleToggle,i=r.handleCloseAll,a=r.handleOpenAll,s=r.handleOpen,c=r.handleClose;d.useImperativeHandle(e.treeRef,function(){return{open:function(m){return s(m,e.onChangeOpen)},close:function(m){return c(m,e.onChangeOpen)},openAll:function(){return a(e.onChangeOpen)},closeAll:function(){return i(e.onChangeOpen)}}});var u=ua().getMonitor(),p=e.canDrop,v=e.canDrag,h=Ko(Ko({extraAcceptTypes:[],listComponent:"ul",listItemComponent:"li",placeholderComponent:"li",sort:!0,insertDroppableFirst:!0,enableAnimateExpand:!1,dropTargetOffset:0,initialOpen:!1},e),{openIds:n,onDrop:function(m,b,y){if(m){var w={dragSourceId:m.id,dropTargetId:b,dragSource:m,dropTarget:cu(e.tree,b),monitor:u},C=e.tree;if(cu(C,m.id)||(C=Xr(Xr([],C,!0),[m],!1)),e.sort===!1){var S=K4(C,m.id,b,y),E=S[1];w.destinationIndex=E,w.relativeIndex=y,e.onDrop(jae(C,m.id,b,y),w);return}e.onDrop(Nae(C,m.id,b),w)}else{var w={dropTargetId:b,dropTarget:cu(e.tree,b),monitor:u};e.sort===!1&&(w.destinationIndex=U4(e.tree,b,y),w.relativeIndex=y),e.onDrop(e.tree,w)}},canDrop:p?function(m,b){return p(e.tree,{dragSourceId:m??void 
0,dropTargetId:b,dragSource:u.getItem(),dropTarget:cu(e.tree,b),monitor:u})}:void 0,canDrag:v?function(m){return v(cu(e.tree,m))}:void 0,onToggle:function(m){return o(m,e.onChangeOpen)}});return ue.createElement(q4.Provider,{value:h},e.children)},X4=d.createContext({}),Hae={isLock:!1},Fae=function(e){var t=d.useState(Hae.isLock),n=t[0],r=t[1];return ue.createElement(X4.Provider,{value:{isLock:n,lock:function(){return r(!0)},unlock:function(){return r(!1)}}},e.children)},zd=d.createContext({}),Sp={dropTargetId:void 0,index:void 0},_ae=function(e){var t=d.useState(Sp.dropTargetId),n=t[0],r=t[1],o=d.useState(Sp.index),i=o[0],a=o[1],s=function(u,p){r(u),a(p)},c=function(){r(Sp.dropTargetId),a(Sp.index)};return ue.createElement(zd.Provider,{value:{dropTargetId:n,index:i,showPlaceholder:s,hidePlaceholder:c}},e.children)},Vae=function(e){return ue.createElement(zae,Ko({},e),ue.createElement(Fae,null,ue.createElement(_ae,null,e.children)))},Wae=function(e){var t=d.useContext(X4);d.useEffect(function(){if(e.current){var n=e.current,r=function(p){var v=p.target;(v instanceof HTMLInputElement||v instanceof HTMLTextAreaElement)&&t.lock()},o=function(p){var v=p.target;(v instanceof HTMLInputElement||v instanceof HTMLTextAreaElement)&&t.unlock()},i=function(p){return r(p)},a=function(p){return o(p)},s=function(p){return r(p)},c=function(p){return o(p)},u=new MutationObserver(function(){document.activeElement===document.body&&t.unlock()});return u.observe(n,{subtree:!0,childList:!0}),n.addEventListener("mouseover",i),n.addEventListener("mouseout",a),n.addEventListener("focusin",s),n.addEventListener("focusout",c),function(){u.disconnect(),n.removeEventListener("mouseover",i),n.removeEventListener("mouseout",a),n.removeEventListener("focusin",s),n.removeEventListener("focusout",c)}}},[e,t]),d.useEffect(function(){var n;(n=e.current)===null||n===void 0||n.setAttribute("draggable",t.isLock?"false":"true")},[e,t.isLock])},Ch={TREE_ITEM:Symbol()},G4=null,Y4=function(e){var 
t=e.target;if(t instanceof HTMLElement){var n=t.closest('[role="listitem"]');e.currentTarget===n&&(G4=n)}},d2=function(e){return Y4(e)},f2=function(e){return Y4(e)},Uae=function(e,t){var n=Za();d.useEffect(function(){var s=t.current;return s==null||s.addEventListener("dragstart",d2),s==null||s.addEventListener("touchstart",f2,{passive:!0}),function(){s==null||s.removeEventListener("dragstart",d2),s==null||s.removeEventListener("touchstart",f2)}},[t]);var r=Dee({type:Ch.TREE_ITEM,item:function(s){var c=Ko({ref:t},e);return n.onDragStart&&n.onDragStart(c,s),c},end:function(s,c){var u=s;n.onDragEnd&&n.onDragEnd(u,c)},canDrag:function(){var s=n.canDrag;return G4!==t.current?!1:s?s(e.id):!0},collect:function(s){return{isDragging:s.isDragging()}}}),o=r[0].isDragging,i=r[1],a=r[2];return[o,i,a]},Kae=function(e){var t=Za(),n=d.useContext(zd),r=QN({accept:Xr([Ch.TREE_ITEM],t.extraAcceptTypes,!0),drop:function(c,u){var p=t.rootId,v=t.onDrop,h=n.dropTargetId,m=n.index;u.isOver({shallow:!0})&&h!==void 0&&m!==void 0&&v(B1(c)?c:null,p,m),n.hidePlaceholder()},canDrop:function(c,u){var p=t.rootId;return u.isOver({shallow:!0})?c===void 0?!1:Ya(c,p,t):!1},hover:function(c,u){if(u.isOver({shallow:!0})){var p=t.rootId,v=n.dropTargetId,h=n.index,m=n.showPlaceholder,b=n.hidePlaceholder,y=Ey(null,e.current,u,t);if(y===null||!Ya(c,p,t)){b();return}(y.id!==v||y.index!==h)&&m(y.id,y.index)}},collect:function(c){var u=c.getItem();return{isOver:c.isOver({shallow:!0})&&c.canDrop(),dragSource:u}}}),o=r[0],i=o.isOver,a=o.dragSource,s=r[1];return[i,a,s]},qae=function(e,t){var n=Za(),r=d.useContext(zd),o=QN({accept:Xr([Ch.TREE_ITEM],n.extraAcceptTypes,!0),drop:function(u,p){var v=r.dropTargetId,h=r.index;p.isOver({shallow:!0})&&v!==void 0&&h!==void 0&&n.onDrop(B1(u)?u:null,v,h),r.hidePlaceholder()},canDrop:function(u,p){if(p.isOver({shallow:!0})){var v=Ey(e,t.current,p,n);return v===null?!1:Ya(u,v.id,n)}return!1},hover:function(u,p){if(p.isOver({shallow:!0})){var 
v=r.dropTargetId,h=r.index,m=r.showPlaceholder,b=r.hidePlaceholder,y=Ey(e,t.current,p,n);if(y===null||!Ya(u,y.id,n)){b();return}(y.id!==v||y.index!==h)&&m(y.id,y.index)}},collect:function(u){var p=u.getItem();return{isOver:u.isOver({shallow:!0})&&u.canDrop(),dragSource:p}}}),i=o[0],a=i.isOver,s=i.dragSource,c=o[1];return[a,s,c]},Xae=function(e,t){var n=d.useMemo(function(){return t===!0?e.filter(function(v){return Fp(e,v.id)}).map(function(v){return v.id}):Array.isArray(t)?t:[]},[t]),r=d.useState(n),o=r[0],i=r[1];d.useEffect(function(){return i(n)},[t]);var a=function(v,h){var m=o.includes(v)?o.filter(function(b){return b!==v}):Xr(Xr([],o,!0),[v],!1);i(m),h&&h(m)},s=function(v){i([]),v&&v([])},c=function(v){var h=e.filter(function(m){return Fp(e,m.id)}).map(function(m){return m.id});i(h),v&&v(h)},u=function(v,h){var m=[];if(Array.isArray(v)){var b=e.filter(function(y){return v.includes(y.id)&&Fp(e,y.id)});m=Xr(Xr([],o,!0),b.map(function(y){return y.id}),!0).filter(function(y,w,C){return C.indexOf(y)===w})}else m=o.includes(v)?o:Xr(Xr([],o,!0),[v],!1);i(m),h&&h(m)},p=function(v,h){var m=o.filter(function(b){return Array.isArray(v)?!v.includes(b):b!==v});i(m),h&&h(m)};return[o,{handleToggle:a,handleCloseAll:s,handleOpenAll:c,handleOpen:u,handleClose:p}]},Gae=function(){return jee(function(e){var t=e.getItemType();return{item:e.getItem(),clientOffset:e.getClientOffset(),isDragging:e.isDragging()&&t===Ch.TREE_ITEM}})},Za=function(){var e=d.useContext(q4);if(!e)throw new Error("useTreeContext must be used under TreeProvider");return e},Yae=function(e,t){var n=Za(),r=n.rootId,o=n.rootProps,i=n.classes,a=(i==null?void 0:i.container)||"";return t&&(i!=null&&i.dropTarget)&&(a="".concat(a," ").concat(i.dropTarget)),e===r&&(i!=null&&i.root)&&(a="".concat(a," ").concat(i.root)),e===r&&(o!=null&&o.className)&&(a="".concat(a," 
").concat(o.className)),a=a.trim(),a},Qae=function(e,t,n){t.current?n(t):n(e),d.useEffect(function(){t.current?n(t):n(e)},[t.current])},Zae=function(e){var t=Za(),n=d.useContext(zd),r=d.useRef(null),o=d.useRef(null),i=t.tree.find(function(M){return M.id===e.id}),a=t.openIds,s=t.classes,c=t.enableAnimateExpand,u=a.includes(e.id),p=Uae(i,r),v=p[0],h=p[1],m=p[2],b=qae(i,r),y=b[0],w=b[1],C=b[2];Qae(r,o,h),Ya(w,e.id,t)&&C(r),d.useEffect(function(){t.dragPreviewRender?m(nte(),{captureDraggingState:!0}):o.current&&m(r)},[m,t.dragPreviewRender]),Wae(r);var S=function(){return t.onToggle(i.id)},E=t.listItemComponent,k=(s==null?void 0:s.listItem)||"";y&&(s!=null&&s.dropTarget)&&(k="".concat(k," ").concat(s.dropTarget)),v&&(s!=null&&s.draggingSource)&&(k="".concat(k," ").concat(s.draggingSource));var O=t.canDrag?t.canDrag(e.id):!0,$=n.dropTargetId===e.id,T={depth:e.depth,isOpen:u,isDragging:v,isDropTarget:$,draggable:O,hasChild:Fp(t.tree,e.id),containerRef:r,handleRef:o,onToggle:S};return ue.createElement(E,{ref:r,className:k,role:"listitem"},t.render(i,T),c&&T.hasChild&&ue.createElement(Tae,{isVisible:u},ue.createElement(ky,{parentId:e.id,depth:e.depth+1})),!c&&T.hasChild&&u&&ue.createElement(ky,{parentId:e.id,depth:e.depth+1}))},p2=function(e){var t=Za(),n=t.placeholderRender,r=t.placeholderComponent,o=t.classes,i=d.useContext(zd),a=ua(),s=a.getMonitor(),c=s.getItem();if(!n||!c)return null;var u=e.dropTargetId===i.dropTargetId&&(e.index===i.index||e.index===void 0&&e.listCount===i.index);return u?ue.createElement(r,{className:(o==null?void 0:o.placeholder)||""},n(c,{depth:e.depth})):null},ky=function(e){var t=Za(),n=d.useRef(null),r=t.tree.filter(function(y){return y.parent===e.parentId}),o=r,i=typeof t.sort=="function"?t.sort:Mae;if(t.insertDroppableFirst){var a=r.filter(function(y){return y.droppable}),s=r.filter(function(y){return!y.droppable});t.sort===!1?o=Xr(Xr([],a,!0),s,!0):(a=a.sort(i),s=s.sort(i),o=Xr(Xr([],a,!0),s,!0))}else t.sort!==!1&&(o=r.sort(i));var 
c=Kae(n),u=c[0],p=c[1],v=c[2];e.parentId===t.rootId&&Ya(p,t.rootId,t)&&v(n);var h=Yae(e.parentId,u),m=t.rootProps||{},b=t.listComponent;return ue.createElement(b,Ko({ref:n,role:"list"},m,{className:h}),o.map(function(y,w){return ue.createElement(ue.Fragment,{key:y.id},ue.createElement(p2,{depth:e.depth,listCount:o.length,dropTargetId:e.parentId,index:w}),ue.createElement(Zae,{id:y.id,depth:e.depth}))}),ue.createElement(p2,{depth:e.depth,listCount:o.length,dropTargetId:e.parentId}))},Jae={height:"100%",left:0,pointerEvents:"none",position:"fixed",top:0,width:"100%",zIndex:100},ese=function(e){var t=e.clientOffset;if(!t)return{};var n=t.x,r=t.y,o="translate(".concat(n,"px, ").concat(r,"px)");return{pointerEvents:"none",transform:o}},tse=function(){var e=Za(),t=Gae(),n=t.isDragging,r=t.clientOffset;return!n||!r?null:ue.createElement("div",{style:Jae},ue.createElement("div",{style:ese(t)},e.dragPreviewRender&&e.dragPreviewRender(t)))};function nse(e,t){return ue.createElement(Vae,Ko({},e,{treeRef:t}),e.dragPreviewRender&&ue.createElement(tse,null),ue.createElement(ky,{parentId:e.rootId,depth:0}))}var rse=d.forwardRef(nse);const Q4=e=>{if(e.droppable&&e.fileType!=="workspace")return it.jsx(eN,{size:24});switch(e.fileType){case"image":return it.jsx(Lq,{size:24});case"csv":return it.jsx(mX,{size:24});case"text":return it.jsx(iU,{size:24});case"workspace":return it.jsx(qq,{size:24});default:return null}},ose="_root_1cexp_1",ise="_isSelected_1cexp_15",ase="_expandIconWrapper_1cexp_20",sse="_isOpen_1cexp_32",lse="_labelGridItem_1cexp_36",cse="_pipeY_1cexp_42",use="_pipeX_1cexp_49",uu={root:ose,isSelected:ise,expandIconWrapper:ase,isOpen:sse,labelGridItem:lse,pipeY:cse,pipeX:use},dse=24,fse=e=>{const{droppable:t,data:n}=e.node,r=e.depth*dse,{Paragraph:o}=Rd,i=u=>{u.stopPropagation(),e.onToggle(e.node.id)},a=()=>e.onSelect(e.node),s=u=>{u.stopPropagation();const 
p=e.plugin.app.workspace.getLeafById(String(e.node.id));p&&(e.plugin.app.workspace.setActiveLeaf(p),a())},c=u=>{u.stopPropagation(),u.preventDefault();const p=new ke.Menu;p.addItem(v=>{v.setTitle("Not Ready Yet").setIcon("surfing").onClick(()=>{new ke.Notice("Not Ready Yet")})}),p.showAtPosition({x:u.clientX,y:u.clientY})};return it.jsxs("div",{className:`tree-node ${uu.root} ${e.isSelected?uu.isSelected:""}`,style:{paddingInlineStart:r},onClick:s,onContextMenu:c,children:[it.jsx("div",{className:`${uu.expandIconWrapper} ${e.isOpen?uu.isOpen:""}`,children:e.hasChild&&it.jsx(sq,{onClick:i,size:24})}),it.jsx("div",{children:it.jsx(Q4,{droppable:t||!1,fileType:n==null?void 0:n.fileType})}),it.jsx("div",{className:uu.labelGridItem,children:it.jsx(o,{ellipsis:{rows:1,expandable:!1,tooltip:!1},style:{marginBottom:0},children:`${e.node.text}`})})]})},pse="_root_1k2x2_1",vse={root:pse},hse=e=>{const t=e.depth*24;return it.jsx("div",{className:vse.root,style:{left:t}})},gse="_app_byfxz_1",mse="_container_byfxz_7",bse="_treeRoot_byfxz_11",yse="_draggingSource_byfxz_15",wse="_placeholderContainer_byfxz_19",xse="_dropTarget_byfxz_23",xl={app:gse,container:mse,treeRoot:bse,draggingSource:yse,placeholderContainer:wse,dropTarget:xse},Sse="_root_wotdi_1",Cse="_icon_wotdi_16",Ese="_label_wotdi_17",O0={root:Sse,icon:Cse,label:Ese},kse=e=>{var n;const t=e.monitorProps.item;return it.jsxs("div",{className:O0.root,children:[it.jsx("div",{className:O0.icon,children:it.jsx(Q4,{droppable:t.droppable||!1,fileType:(n=t==null?void 0:t.data)==null?void 0:n.fileType})}),it.jsx("div",{className:O0.label,children:t.text})]})};class Ose extends ke.Modal{constructor(n,r,o){super(n);Ce(this,"plugin");Ce(this,"result");Ce(this,"onSubmit");this.onSubmit=o}onOpen(){var r;const{contentEl:n}=this;(r=n.parentElement)==null||r.classList.add("wb-workspace-modal"),n.createEl("h2",{text:"Workspace Name"}),new ke.Setting(n).setName("Name").addText(o=>o.onChange(i=>{this.result=i})),new 
ke.Setting(n).addButton(o=>o.setButtonText("Submit").setCta().onClick(()=>{this.close(),this.onSubmit(this.result)}))}onClose(){const{contentEl:n}=this;n.empty()}}const $se=e=>{const t=[];for(let n=0;n{const[e,t]=d.useState(),n=ue.useCallback(o=>t(o),[]);d.useEffect(()=>{const o=document.body.find('div[data-type="surfing-tab-tree"]');t(o)},[]);const r=ue.useMemo(()=>({rootElement:e}),[e]);return{dndArea:e,handleRef:n,html5Options:r}};function Tse(e){const[t,n]=d.useState(e.plugin.settings.treeData||[]),r=m=>n(m),{dndArea:o,handleRef:i,html5Options:a}=Ise(),[s,c]=d.useState(null),u=m=>{c(m)},p=m=>{const b=new ke.Menu;b.addItem(y=>{y.setTitle("New Group").setIcon("folder").onClick(()=>{new Ose(e.plugin.app,e.plugin,w=>{const C={id:$se(16),parent:0,droppable:!0,text:w,data:{fileType:"workspace",fileSize:"",icon:"folder"}};n([...t,C])}).open()})}),b.showAtPosition({x:m.clientX,y:m.clientY})};ue.useEffect(()=>{const m=t.findIndex(w=>{var C;return((C=w.data)==null?void 0:C.fileType)==="site"});if(m===-1)return;const b=String(t[m].id);if(!e.plugin.app.workspace.getLeafById(b)){n([]),e.plugin.settings.treeData=[],e.plugin.settingsTab.applySettingsUpdate();return}t&&(e.plugin.settings.treeData=t,e.plugin.settingsTab.applySettingsUpdate())},[t]),ue.useEffect(()=>{if(t.length>0)return;const m=e.plugin.app.workspace.getLeavesOfType(Nr),b=m.map(y=>({id:y.id,parent:0,droppable:!0,text:y.view.currentTitle,data:{fileType:"site",fileSize:"",icon:y.view.favicon}}));return n([...t,...b]),()=>{m.forEach(y=>{h(y.id)&&(n([...t,{id:y.id,parent:0,droppable:!0,text:y.view.currentTitle,data:{fileType:"site",fileSize:"",icon:y.view.favicon}}]),y.view.webviewEl.addEventListener("dom-ready",()=>{v(y.view,"")}))})}},[]);const v=ue.useCallback((m,b)=>{n(y=>{const w=y.findIndex(C=>C.id===m.leaf.id);if(w!==-1){const C=y[w];return[...y.slice(0,w),{...C,text:b||m.currentTitle,data:{...C.data,icon:m.favicon}},...y.slice(w+1)]}else 
return[...y,{id:m.leaf.id,parent:0,droppable:!0,text:b||m.currentTitle,data:{fileType:"site",fileSize:"",icon:m.favicon}}]})},[]),h=m=>!t.some(b=>b.id===m);return d.useEffect(()=>{e.plugin.app.workspace.on("surfing:page-change",(m,b)=>{h(b.leaf.id)&&v(b,m)}),e.plugin.app.workspace.on("layout-change",()=>{if(e.plugin.app.workspace.getLeavesOfType(Nr).length===0){n([]);return}})},[]),t.length>0?it.jsx("div",{ref:i,className:xl.container,children:it.jsx(dee,{backend:gae,options:Aae({html5:{rootElement:o}}),children:it.jsx("div",{className:xl.app,onContextMenu:p,children:it.jsx(rse,{tree:t,rootId:0,render:(m,{depth:b,isOpen:y,hasChild:w,onToggle:C})=>it.jsx(fse,{plugin:e.plugin,node:m,depth:b,isOpen:y,onToggle:C,hasChild:w,isSelected:m.id===(s==null?void 0:s.id),onSelect:u}),dragPreviewRender:m=>it.jsx(kse,{monitorProps:m}),onDrop:r,classes:{root:xl.treeRoot,draggingSource:xl.draggingSource,placeholder:xl.placeholderContainer},sort:!1,insertDroppableFirst:!1,canDrop:(m,{dragSource:b,dropTargetId:y})=>{if((b==null?void 0:b.parent)===y)return!0},dropTargetOffset:10,placeholderRender:(m,{depth:b})=>it.jsx(hse,{node:m,depth:b})})})})}):it.jsx("div",{className:`${xl.app} tab-tree-empty-container`,children:it.jsxs("div",{className:"tab-tree-empty-state",children:[it.jsx(eN,{size:64,width:64,height:64}),it.jsx("span",{children:"No surfing tabs open"})]})})}const gu="surfing-tab-tree";class Pse extends ke.ItemView{constructor(t,n){super(t),this.plugin=n,this.plugin=n}getViewType(){return gu}getDisplayText(){return"Surfing Tab Tree"}getIcon(){return"chrome"}async onOpen(){kv.createRoot(this.containerEl).render(it.jsx(ue.StrictMode,{children:it.jsx(Tse,{plugin:this.plugin})}))}}class Mse extends ke.Component{constructor(n,r){super();Ce(this,"plugin");Ce(this,"parent");Ce(this,"listEl");this.plugin=n,this.parent=r}onload(){this.listEl=this.parent.createEl("div",{cls:"wb-last-opened-files"});const n=this.plugin.app.workspace.getLastOpenFiles().slice(0,8);for(const r of n){const 
o=this.listEl.createEl("button",{cls:"wb-last-opened-file"}),i=o.createEl("span",{cls:"wb-last-opened-file-icon"});ke.setIcon(i,"file-text"),o.createEl("span",{cls:"wb-last-opened-file-name",text:r}),o.onclick=async()=>{await this.plugin.app.workspace.openLinkText(r,r,!1)}}}onunload(){super.onunload(),this.listEl.empty(),this.listEl.detach()}}class Nse extends ke.Plugin{constructor(){super(...arguments);Ce(this,"settings");Ce(this,"settingsTab");Ce(this,"onLayoutChangeEventRef");Ce(this,"applyURLDebounceTimer",0);Ce(this,"urlOpened",!1)}async onload(){await this.loadSettings(),this.checkWebBrowser(),this.settingsTab=new lL(this.app,this),this.addSettingTab(this.settingsTab),this.registerView(Nr,n=>new on(n,this)),this.registerView(Qb,n=>new _Z(n,this)),this.settings.enableTreeView&&this.registerView(gu,n=>new Pse(n,this)),this.settings.bookmarkManager.openBookMark&&this.registerView(Mi,n=>new BZ(n,this));try{this.settings.enableHtmlPreview&&this.registerExtensions(Yb,Qb)}catch{new ke.Notice(`File extensions ${Yb} had been registered by other plugin!`)}this.openTabTreeView(),this.updateEmptyLeaves(!1),this.registerContextMenu(),this.registerCustomURI(),this.registerCodeBlock(),this.registerHoverPopover(),this.patchMarkdownView(),this.patchWindowOpen(),this.patchMarkdownView(),ke.requireApiVersion("1.0.4")&&this.patchEditMode(),this.onLayoutChangeEventRef=this.app.workspace.on("layout-change",()=>{const 
n=this.app.workspace.getActiveViewOfType(ke.ItemView);n&&this.addHeaderAndSearchBar(n)}),this.registerCommands(),this.registerCustomIcon(),this.patchEmptyView(),this.patchMarkdownPreviewRenderer(),this.patchProperty(),ke.requireApiVersion("1.1.0")&&this.settings.useWebview&&(this.patchCanvasNode(),this.patchCanvas()),this.registerEmbededHTML(),this.settings.bookmarkManager.openBookMark&&this.registerRibbon()}onunload(){this.app.workspace.detachLeavesOfType(Nr),this.app.workspace.detachLeavesOfType(Mi),this.settings.enableTreeView&&this.app.workspace.detachLeavesOfType(gu),this.app.workspace.offref(this.onLayoutChangeEventRef),this.updateEmptyLeaves(!0),this.unRegisterEmbededHTML(),ke.requireApiVersion("1.1.0")&&this.settings.useWebview&&this.refreshAllRelatedView()}openTabTreeView(){this.app.workspace.onLayoutReady(this.onLayoutReady.bind(this))}onLayoutReady(){var n;this.settings.enableTreeView&&(this.app.workspace.getLeavesOfType(gu).length||(n=this.app.workspace.getLeftLeaf(!1))==null||n.setViewState({type:gu}))}registerRibbon(){this.addRibbonIcon("bookmark",Mi,async()=>{const n=this.app.workspace;n.detachLeavesOfType(Mi),await n.getLeaf(!1).setViewState({type:Mi}),n.revealLeaf(n.getLeavesOfType(Mi)[0])})}addHeaderAndSearchBar(n){if(!n||n.getViewType()!="empty"&&n.getViewType()!=="home-tab-view")return;if(!n.headerEl.children[2].hasClass("web-browser-header-bar")){const o=new Gb(n.titleContainerEl,this,n);o.onLoad(),this.settings.showSearchBarInPage||o.focus(),o.addOnSearchBarEnterListener(i=>{on.spawnWebBrowserView(this,!1,{url:i})})}if(this.app.plugins.getPlugin("home-tab"))return;this.settings.randomBackground&&n.contentEl.toggleClass("wb-random-background",!0);const r=n.contentEl.children[0].hasClass("empty-state")?n.contentEl.children[0]:null;if(r&&!r.hasClass("wb-page-search-bar")&&this.settings.showSearchBarInPage){const o=r.createEl("div",{cls:"wb-search-bar-container"});r==null||r.addClass("wb-page-search-bar");const i=new 
UZ(o,n,this);if(this.settings.lastOpenedFiles&&new Mse(this,i.inPageSearchBarContainerEl).onload(),this.settings.useIconList){new KZ(r,n,this);const a=r.querySelector(".empty-state-container");a&&a.addClass("wb-empty-actions")}i.focus(),i.addOnSearchBarEnterListener(a=>{a.trim()===""||this.settings.showOtherSearchEngines||on.spawnWebBrowserView(this,!1,{url:a})})}}removeHeaderAndSearchBar(n){var r,o,i,a,s;n&&(n.getViewType()!="empty"&&n.getViewType()!=="home-tab-view"||(n.titleContainerEl.hasClass("wb-header-bar")&&(n.titleContainerEl.empty(),n.titleContainerEl.removeClass("wb-header-bar")),(r=n.contentEl.children[1])!=null&&r.hasClass("surfing-settings-icon")&&((o=n.contentEl.children[1])==null||o.detach()),!this.app.plugins.getPlugin("home-tab")&&n.contentEl.children[0].hasClass("wb-page-search-bar")&&this.settings.showSearchBarInPage&&((i=n.contentEl.children[0].children[1])==null||i.detach(),(a=n.contentEl.children[0].children[1])==null||a.empty(),(s=n.contentEl.children[0].children[1])==null||s.detach(),n.contentEl.children[0].removeClass("wb-page-search-bar"))))}updateEmptyLeaves(n){const r=this.app.workspace.getLeavesOfType("empty"),o=this.app.workspace.getLeavesOfType("home-tab-view");[...r,...o].forEach(a=>{a.view instanceof ke.ItemView&&(n||this.addHeaderAndSearchBar(a.view),n&&this.removeHeaderAndSearchBar(a.view))})}registerCustomURI(){this.settings.openInObsidianWeb&&this.registerObsidianProtocolHandler("web-open",async n=>{let r=n.url;r&&(decodeURI(r)!==r&&(r=decodeURI(r).toString().replace(/\s/g,"%20")),this.settings.bookmarkManager.saveBookMark?new Rse(this.app,r,this).open():on.spawnWebBrowserView(this,!0,{url:r}))})}registerContextMenu(){this.registerEvent(this.app.workspace.on("editor-menu",(n,r,o)=>{if(!r)return;if(r.getSelection().length===0){const a=r.getClickableTokenAt(r.getCursor());a&&a.type==="external-link"&&(n.addItem(s=>{s.setIcon("surfing").setTitle(je("Open With 
Surfing")).onClick(()=>{on.spawnWebBrowserView(this,!0,{url:a.text})})}).addItem(s=>{s.setIcon("surfing").setTitle(je("Open With External Browser")).onClick(()=>{window.open(a.text,"_blank","external")})}),this.app.plugins.getPlugin("obsidian-hover-editor")&&n.addItem(s=>{s.setIcon("surfing").setTitle("Open With Hover Editor").onClick(async()=>{var p;if(!this.app.plugins.getPlugin("obsidian-hover-editor")){new ke.Notice("Please install obsidian-hover-editor plugin first");return}const u=await((p=this.app.plugins.getPlugin("obsidian-hover-editor"))==null?void 0:p.spawnPopover(o.contentEl));u&&u.setViewState({type:"surfing-view",active:!0,state:{url:a.text}})})}));return}const i=r.getSelection();n.addItem(a=>{const s=[...Ri,...this.settings.customSearchEngine],c=a.setTitle("Search In Surfing").setIcon("search").setSubmenu();s.forEach(u=>{c.addItem(p=>{p.setIcon("search").setTitle(u.name).onClick(()=>{on.spawnWebBrowserView(this,!0,{url:u.url+i})})})})})}))}registerCommands(){this.addCommand({id:"open-surfing-view-next",name:"open surfing view next",callback:()=>{this.app.workspace.getLeaf(!0).setViewState({type:"surfing-view-next",active:!0})}}),this.addCommand({id:"clear-browsing-data",name:"Clear browsing data",callback:()=>{remote.session.fromPartition(`persist:surfing-vault-${this.app.appId}`).clearStorageData({storages:["appcache","cookies","filesystem","indexdb","localstorage","shadercache","websql","serviceworkers","cachestorage"]}).then(()=>{new ke.Notice("Browsing data cleared")})}}),this.addCommand({id:"open-current-url-with-external-browser",name:je("Open Current URL In External Browser"),checkCallback:r=>{var i;const o=this.app.workspace.getActiveViewOfType(on);if(o)return r||window.open((i=o.getState())==null?void 0:i.url,"_blank"),!0}}),this.addCommand({id:"clear-current-page-history",name:je("Clear Current Page History"),checkCallback:r=>{const o=this.app.workspace.getActiveViewOfType(on);if(o)return 
r||o.clearHistory(),!0}}),this.addCommand({id:"open-inspecter",name:"Open Inspecter",checkCallback:r=>{const o=this.app.workspace.getActiveViewOfType(on);if(o)return r||o.openInpecter(),!0}}),this.addCommand({id:"refresh-page",name:je("Refresh Current Page"),checkCallback:r=>{const o=this.app.workspace.getActiveViewOfType(on);if(o)return r||o.refresh(),!0}}),this.addCommand({id:"toggle-same-tab-globally",name:je("Toggle Same Tab In Web Browser"),callback:async()=>{this.settings.openInSameTab=!this.settings.openInSameTab,await this.saveSettings()}}),this.addCommand({id:"get-current-timestamp",name:je("Get Current Timestamp from Web Browser"),editorCallback:(r,o)=>{var u;const i=this.app.workspace.getLeavesOfType("surfing-view");if(i.length===0)return;const s=i.sort((p,v)=>v.activeTime-p.activeTime)[0].view,c=(u=s.getState())==null?void 0:u.url;c!=null&&c.contains("bilibili")&&s.getCurrentTimestamp(r)}}),this.addCommand({id:"search-in-current-page-title-bar",name:je("Search In Current Page Title Bar"),callback:()=>{const r=this.app.workspace.getActiveViewOfType(ke.MarkdownView);if(!r||r.headerEl.childNodes.length>4)return;const o=new Gb(r.headerEl,this,r,!1);o.onLoad(),o.addOnSearchBarEnterListener(i=>{on.spawnWebBrowserView(this,!1,{url:i})}),o.focus()}}),[...Ri,...this.settings.customSearchEngine].forEach(r=>{this.addCommand({id:"using"+r.name.replace(/\s/g,"-")+"-to-search",name:je("Using ")+r.name+je(" to search"),editorCallback:(o,i)=>{if(o.getSelection().length===0)return;const a=o.getSelection();on.spawnWebBrowserView(!0,{url:r.url+a})}})}),this.addCommand({id:"toggle-dark-mode",name:je("Toggle Dark Mode"),callback:async()=>{this.settings.darkMode=!this.settings.darkMode,await this.saveSettings();const r=this.app.workspace.getActiveViewOfType(on);r&&r.refresh()}}),this.addCommand({id:"focus-on-current-search-bar",name:je("Focus On Current Search Bar"),checkCallback:r=>{const o=this.app.workspace.getActiveViewOfType(on);if(o){if(!r){const 
i=o.headerBar;i&&i.focus()}return!0}}}),this.addCommand({id:"copy-link-to-highlight",name:je("Copy Link to Highlight"),checkCallback:r=>{const o=this.app.workspace.getActiveViewOfType(on);if(o)return r||o.copyHighLight(),!0}}),this.addCommand({id:"copy-surfing-tabs-as-markdown",name:"Copy Surfing Tabs as Markdown",callback:()=>{const r=this.app.workspace.getLeavesOfType("surfing-view");if(r.length===0)return;r.sort((i,a)=>a.activeTime-i.activeTime);let o="";r.forEach(i=>{const a=i.view,s=a.currentUrl;if(!s)return;const c=a.currentTitle;c&&(o.length===0?o=`- [${c}](<${s}>)`:o+=` -- [${c}](<${s}>)`)});try{navigator.clipboard.writeText(o)}catch{new ke.Notice(je("Copy failed, you may focus on surfing view, click the title bar, and try again."))}}})}registerCustomIcon(){ke.addIcon("surfing",'')}registerCodeBlock(){this.registerMarkdownCodeBlockProcessor("surfing",(n,r,o)=>{const i=ke.parseYaml(n);if(!i||!i.url)return;const a=i.url;r.toggleClass("surfing-embed-website",!0);const s=r.createEl("div",{cls:"surfing-embed-website-container"});new Xb(s,a,this).onload()})}registerHoverPopover(){this.settings.hoverPopover&&(this.registerEditorExtension(TD.EditorView.domEventHandlers({mouseover:(n,r)=>{var u;if(!n.target.hasClass("cm-underline")&&!n.target.hasClass("external-link"))return;const o=r.state.field(ke.editorInfoField),i=(u=o.editMode)==null?void 0:u.editor,a=r.posAtDOM(n.target),s=i.offsetToPos(a),c=i.getClickableTokenAt(s);c&&c.text.trim().startsWith("http")&&this.app.workspace.trigger("hover-link",{event:n,source:"editor",hoverParent:o,targetEl:n.target,linktext:c.text.trim()})}})),this.registerMarkdownPostProcessor((n,r)=>{n.querySelectorAll("a").forEach(o=>{o.addEventListener("mouseover",i=>{o.hasClass("external-link")&&(!o.href||!o.href.trim().startsWith("http")||this.app.workspace.trigger("hover-link",{event:i,source:"preview",hoverParent:r,targetEl:o,linktext:o.href.trim()}))})})}),this.app.workspace.onLayoutReady(()=>{const 
n=this.app.internalPlugins.plugins["page-preview"],r=this;this.register(Yi(n.instance,{onLinkHover(o){return function(i,a,s,c,u,...p){if(s.startsWith("http://")||s.startsWith("https://")){let{hoverPopover:v}=i;if(v&&v.state!==ke.PopoverState.Hidden&&v.targetEl===a)return;v=new ke.HoverPopover(i,a),v.hoverEl.addClass("surfing-hover-popover"),setTimeout(()=>{if(v.state!==ke.PopoverState.Hidden){const h=v.hoverEl.createDiv("surfing-hover-popover-container");new Xb(h,s,r).onload()}},100);return}return o.call(this,i,a,s,c,u,...p)}}})),n.enabled&&(n.disable(),n.enable(),this.register(()=>{n.enabled&&(n.disable(),n.enable())}))}))}checkWebBrowser(){this.app.plugins.getPlugin("obsidian-web-browser")&&new ke.Notice(je("You enabled obsidian-web-browser plugin, please disable it/disable surfing to avoid conflict."),4e3),this.app.vault.getConfig("showViewHeader")||new ke.Notice(je("You didn't enable show tab title bar in apperance settings, please enable it to use surfing happily."),4e3)}patchMarkdownView(){const n=this;this.register(Yi(ke.MarkdownView.prototype,{triggerClickableToken:r=>function(o,i,...a){if(o.type==="external-link"){if(i==="tab"||i==="window"){window.open(o.text,"_blank","external");return}const s=o.text!==decodeURI(o.text)?decodeURI(o.text):o.text;sp(s)?on.spawnWebBrowserView(n,!0,{url:s}):window.open(s,"_blank","external");return}return r.call(this,o,i,...a)}}))}patchEditMode(){const n=async r=>{const o=r.app.workspace.getLeavesOfType("markdown")[0],i=o==null?void 0:o.view;if(ke.requireApiVersion("1.7.3")&&(o!=null&&o.isDeferred)&&await o.loadIfDeferred(),!o||!i)return!1;const a=i.editMode??i.sourceMode;if(!a)return!1;const s=a.constructor;return this.register(Yi(s.prototype,{triggerClickableToken:c=>function(u,p,...v){if(u.type==="external-link"){if(p==="tab"||p==="window"){window.open(u.text,"_blank","external");return}const 
h=u.text!==decodeURI(u.text)?decodeURI(u.text):u.text;sp(h)?on.spawnWebBrowserView(r,!0,{url:h}):window.open(h,"_blank","external");return}return c.call(this,u,p,...v)}})),!0};this.app.workspace.onLayoutReady(async()=>{if(!await n(this)){const r=this.app.workspace.on("layout-change",async()=>{await n(this)&&this.app.workspace.offref(r)});this.registerEvent(r)}})}patchWindowOpen(){const n=this,r=()=>{clearTimeout(this.applyURLDebounceTimer),this.urlOpened=!0,this.applyURLDebounceTimer=window.setTimeout(()=>{this.urlOpened=!1},300)},o=()=>this.urlOpened,i=Yi(window,{open:a=>function(s,c,u){let p="";return typeof s=="string"?p=s:s instanceof URL&&(p=s.toString()),decodeURI(p)!==p&&(p=decodeURI(p).toString().replace(/\s/g,"%20")),p==="about:blank"&&u||!sp(p)||p!=="about:blank"&&(c==="_blank"||c==="_self")||u==="external"?a(s,c,u):(p&&!c&&!u&&!o()&&(on.spawnWebBrowserView(n,!0,{url:p}),r()),null)}});this.register(i)}patchMarkdownPreviewRenderer(){const n=this,r=()=>{clearTimeout(this.applyURLDebounceTimer),this.urlOpened=!0,this.applyURLDebounceTimer=window.setTimeout(()=>{this.urlOpened=!1},300)},o=()=>this.urlOpened,i=Yi(ke.MarkdownPreviewRenderer,{registerDomEvents:a=>function(s,c,...u){return s==null||s.on("click",".external-link",(p,v)=>{if(p.preventDefault(),v){const h=v.getAttribute("href");if(h){if(p.ctrlKey||p.metaKey){window.open(h,"_blank","external"),r();return}sp(h)&&!o()?(on.spawnWebBrowserView(n,!0,{url:h}),r()):window.open(h,"_blank","external");return}}}),a.call(this,s,c,...u)}});this.register(i)}patchProperty(){if(!ke.requireApiVersion("1.4.0"))return;const n=r=>{var c,u;const o=r.app.workspace.activeEditor,i=(c=o==null?void 0:o.metadataEditor)==null?void 0:c.rendered.filter(p=>p.entry.type==="text");if(!(i!=null&&i.length))return!1;const a=i[0];if(!a)return!1;const s=a.rendered;return s!=null&&s.constructor?(this.register(Yi(s.constructor.prototype,{render:p=>async function(...v){var b;p.apply(this,...v);const 
h=this.linkTextEl,m=h.cloneNode(!0);(b=h.parentNode)==null||b.replaceChild(m,h),m.onclick=y=>{if(!(y.button!==0&&y.button!==1))if(y.preventDefault(),this.isWikilink())this.ctx.app.workspace.openLinkText(this.getLinkText(),this.ctx.sourcePath,ke.Keymap.isModEvent(y),{active:!0});else if(PO(this.value)){if(ke.Keymap.isModEvent(y)){window.open(this.value,"_blank");return}on.spawnWebBrowserView(r,!0,{url:this.value});return}else MO(this.value)&&window.open("mailto:"+this.value,"_blank")},m.oncontextmenu=y=>{y.preventDefault();const w=new ke.Menu().addSections(["title","correction","spellcheck","open","selection","clipboard","action","view","info","","danger"]);this.isWikilink()?(y.preventDefault(),this.ctx.app.workspace.handleLinkContextMenu(w,this.getLinkText(),this.ctx.sourcePath)):PO(this.value)?(y.preventDefault(),this.ctx.app.workspace.handleExternalLinkContextMenu(w,this.value)):MO(this.value)&&(y.preventDefault(),this.ctx.app.workspace.handleExternalLinkContextMenu(w,"mailto:"+this.value)),w.showAtMouseEvent(y)}}})),(u=o==null?void 0:o.leaf)==null||u.rebuildView(),!0):!1};this.app.workspace.onLayoutReady(()=>{if(!n(this)){const r=this.app.workspace.on("layout-change",()=>{n(this)&&this.app.workspace.offref(r)});this.registerEvent(r)}})}patchEmptyView(){const n=()=>{const r=this.app.workspace.getLeavesOfType("empty").first(),o=r==null?void 0:r.view,i=this;if(!o)return!1;const a=o.constructor;return this.register(Yi(a.prototype,{onOpen:s=>function(...c){const u=i.app.plugins.getPlugin("surfing").settings;if(!this.contentEl.querySelector(".wb-bookmark-bar")&&u.bookmarkManager.openBookMark&&(this.contentEl.classList.add("mod-wb-bookmark-bar"),new AN(this,this.plugin).onload()),!this.contentEl.querySelector(".surfing-settings-icon")){const p=this.contentEl.createDiv({cls:"surfing-settings-icon"});p.addEventListener("click",()=>{i.app.setting.open(),i.app.setting.openTabById("surfing")}),ke.setIcon(p,"settings")}return 
s.call(this,...c)}})),r==null||r.rebuildView(),!0};this.app.workspace.onLayoutReady(()=>{if(!n()){const r=this.app.workspace.on("layout-change",()=>{n()&&this.app.workspace.offref(r)});this.registerEvent(r)}})}patchCanvasNode(){const n=r=>{var u;const o=(u=r.app.workspace.getLeavesOfType("canvas").first())==null?void 0:u.view,i=this;if(!o)return!1;const s=(p=>{for(const[,v]of p)if(v.url!==void 0)return v;return!1})(o.canvas.nodes);if(!s)return!1;const c=Yi(s==null?void 0:s.constructor.prototype,{render(p){return function(){p.call(this),!this.canvas.isDragging&&new XZ(this,i,"canvas",this==null?void 0:this.canvas).onload()}}});return this.register(c),s.render(),!0};this.app.workspace.onLayoutReady(()=>{if(!n(this)){const r=this.app.workspace.on("layout-change",()=>{n(this)&&this.app.workspace.offref(r)});this.registerEvent(r)}})}patchCanvas(){const n=r=>{var s;const o=(s=r.app.workspace.getLeavesOfType("canvas").first())==null?void 0:s.view;if(!o)return!1;const i=o.canvas.constructor,a=Yi(i.prototype,{selectOnly:c=>function(u){c.call(this,u),u.contentEl&&u.url!==void 0&&!u.contentEl.classList.contains("wb-view-content")&&setTimeout(()=>{u.render()},0)}});return this.register(a),!0};this.app.workspace.onLayoutReady(()=>{if(!n(this)){const r=this.app.workspace.on("layout-change",()=>{n(this)&&this.app.workspace.offref(r)});this.registerEvent(r)}})}refreshAllRelatedView(){for(const n of this.app.workspace.getLeavesOfType("canvas"))n&&n.rebuildView()}registerEmbededHTML(){const n=this;this.app.embedRegistry.registerExtension("html",(r,o,i)=>new NO(r,o,n.app,n)),this.app.embedRegistry.registerExtension("htm",(r,o,i)=>new NO(r,o,n.app,n))}unRegisterEmbededHTML(){this.app.embedRegistry.unregisterExtension("html"),this.app.embedRegistry.unregisterExtension("htm")}async loadSettings(){this.settings=Object.assign({},zo,await this.loadData())}async saveSettings(){await this.saveData(this.settings)}}class Rse extends 
ke.Modal{constructor(n,r,o){super(n);Ce(this,"url");Ce(this,"plugin");this.url=r,this.app=n,this.plugin=o}onOpen(){var a;const{contentEl:n}=this;(a=n.parentElement)==null||a.classList.add("wb-bookmark-modal"),n.createEl("h2",{text:"Save Bookmark"});const r=n.createDiv({cls:"wb-bookmark-modal-btn-container"}),o=r.createEl("button",{text:"Save"});o.onclick=async()=>{this.close();const s=await BN(this.url);if(!s)return;const c=await Ua(this.plugin),u=c.bookmarks,p=this.plugin.settings.bookmarkManager.defaultCategory.split(",").map(v=>v.trim());u.unshift({id:String(Dd(this.url)),name:s.title||"Untitled",url:this.url,description:s.description||"",category:p.length>0?p:["ROOT"],tags:"",created:ke.moment().valueOf(),modified:ke.moment().valueOf()}),await Il(this.plugin,{bookmarks:u,categories:c.categories}),fv(this.plugin,u,c.categories,!0)};const i=r.createEl("button",{text:"Open"});i.onclick=()=>{this.close(),on.spawnWebBrowserView(this.plugin,!0,{url:this.url})}}onClose(){const{contentEl:n}=this;n.empty()}}module.exports=Nse; - -/* nosourcemap */ \ No newline at end of file diff --git a/.obsidian/plugins/surfing/manifest.json b/.obsidian/plugins/surfing/manifest.json deleted file mode 100644 index 8b145686..00000000 --- a/.obsidian/plugins/surfing/manifest.json +++ /dev/null @@ -1,15 +0,0 @@ -{ - "id": "surfing", - "name": "Surfing", - "version": "0.9.14", - "minAppVersion": "1.4.0", - "description": "Surf the Net in Obsidian.", - "author": "Boninall & Windily-cloud", - "authorUrl": "https://github.com/Quorafind", - "isDesktopOnly": true, - "fundingUrl": { - "Buy Me a Coffee": "https://www.buymeacoffee.com/boninall", - "爱发电": "https://afdian.net/a/boninall", - "支付宝": "https://cdn.jsdelivr.net/gh/Quorafind/.github@main/IMAGE/%E6%94%AF%E4%BB%98%E5%AE%9D%E4%BB%98%E6%AC%BE%E7%A0%81.jpg" - } -} \ No newline at end of file diff --git a/.obsidian/plugins/surfing/styles.css b/.obsidian/plugins/surfing/styles.css deleted file mode 100644 index cdd65253..00000000 --- 
a/.obsidian/plugins/surfing/styles.css +++ /dev/null @@ -1 +0,0 @@ -._root_1cexp_1{align-items:center;display:grid;grid-template-columns:auto auto 1fr auto;height:32px;padding-inline-end:8px;border-bottom:solid 1px #eee;border-radius:var(--size-2-2)}._root_1cexp_1:hover{background:var(--color-base-30)}._root_1cexp_1._isSelected_1cexp_15{background:var(--color-base-40);border-radius:var(--size-2-2)}._expandIconWrapper_1cexp_20{align-items:center;font-size:0;cursor:pointer;display:flex;height:24px;justify-content:center;width:24px;transition:transform linear .1s;transform:rotate(0)}._expandIconWrapper_1cexp_20._isOpen_1cexp_32{transform:rotate(90deg)}._labelGridItem_1cexp_36{padding-inline-start:8px;width:100%;overflow:hidden}._pipeY_1cexp_42{position:absolute;border-left:2px solid #e7e7e7;left:-7px;top:-7px}._pipeX_1cexp_49{position:absolute;left:-7px;top:15px;height:2px;background-color:#e7e7e7;z-index:-1}._root_1k2x2_1{background-color:#1967d2;height:2px;position:absolute;right:0;transform:translateY(-50%);top:0}._app_byfxz_1{height:100%;margin:var(--size-4-2);border-radius:var(--size-2-2)}._container_byfxz_7,._treeRoot_byfxz_11{height:100%}._draggingSource_byfxz_15{opacity:.3}._placeholderContainer_byfxz_19{position:relative}._dropTarget_byfxz_23{background-color:var(--color-accent)}._root_wotdi_1{align-items:"center";background-color:#1967d2;border-radius:4px;box-shadow:0 12px 24px -6px #00000040,0 0 0 1px #00000014;color:#fff;display:inline-grid;font-size:14px;gap:8px;grid-template-columns:auto auto;padding:4px 8px;pointer-events:none}._icon_wotdi_16,._label_wotdi_17{align-items:center;display:flex}.wb-view-content{padding:0!important;overflow:hidden!important}.wb-frame{width:100%;height:100%;border:none;background-color:#fff;background-clip:content-box}.wb-view-content:has(.wb-bookmark-bar) .wb-frame{height:calc(100% - 
32px)}.wb-header-bar:after{background:transparent!important}.wb-search-bar{width:100%}.wb-search-box{display:flex;flex-direction:row;position:absolute;z-index:20;top:35px;right:200px;width:200px;height:44px;background-color:var(--color-base-20);padding:7px;border:var(--input-border-width) solid var(--background-modifier-border)}.wb-search-input{width:60%;height:100%}.wb-search-button-group{width:40%;height:100%;display:flex;flex-direction:row}.wb-search-button{display:flex;align-items:center;width:100%;height:var(--input-height);border:var(--input-border-width) solid var(--background-modifier-border);background-color:var(--background-modifier-form-field);margin-left:4px}.wb-page-search-bar-input-container,.wb-page-search-bar-input{width:500px;min-width:20px;height:44px!important;border-radius:15px!important;margin-bottom:20px;margin-left:auto;margin-right:auto}.workspace-split:not(.mod-root) .wb-page-search-bar-input-container{width:250px}.workspace-split:not(.mod-root) .wb-page-search-bar-input{width:250px}.wb-page-search-bar{flex-direction:column-reverse}.wb-page-search-bar .wb-empty-actions{display:none}.wb-page-search-bar .empty-state-container{padding-top:100px}.wb-random-background .empty-state{background:url(https://source.unsplash.com/random/?mountain) no-repeat center center;background-size:cover}.wb-random-background .empty-state input{filter:opacity(.8)}.wb-search-bar-container{margin-left:auto;margin-right:auto;position:absolute;top:26%}.wb-page-search-bar-container .wb-last-opened-files{display:grid;grid-template-columns:1fr 1fr 1fr 1fr;gap:var(--size-4-2);justify-items:center;margin-top:var(--size-4-12)}.wb-page-search-bar-container .wb-last-opened-files .wb-last-opened-file{display:flex;flex-direction:row;align-items:center;gap:var(--size-2-2);background-color:var(--interactive-normal);box-shadow:var(--input-shadow);opacity:.6;justify-content:flex-start;width:160px;height:40px;cursor:pointer}.wb-page-search-bar-container .wb-last-opened-files 
.wb-last-opened-file:hover{opacity:1}.wb-page-search-bar-container .wb-last-opened-files .wb-last-opened-file-name{overflow:hidden;text-overflow:ellipsis;white-space:nowrap}.wb-page-search-bar-text{text-align:center;margin-bottom:20px;font-size:72px;font-weight:bolder;color:var(--color-accent)}.wb-create-btn,.wb-search-btn{opacity:.4;color:#9da7d9}.wb-close-btn{opacity:.4;color:#d99da8}.wb-icon-list-container button{padding:1px 6px}.wb-create-btn:hover,.wb-search-btn:hover,.wb-close-btn:hover{opacity:1}.wb-close-btn:hover>button>.lucide-x-square{color:#d99da8}.wb-close-btn>button>.lucide-x-square{color:var(--color-red)}.wb-icon-list-container{margin-right:auto;margin-left:auto;position:absolute;bottom:12%;display:flex;flex-direction:row;gap:10px}.wb-btn-tip{color:var(--color-base-60)}.wb-btn:hover{background:var(--color-accent)!important}.wb-btn{filter:drop-shadow(0 4px 3px rgb(0 0 0 / .07)) drop-shadow(0 2px 2px rgb(0 0 0 / .06))}.theme-dark .wb-btn a{color:var(--color-base-80)!important}.setting-item.search-engine-setting{flex-wrap:wrap}.search-engine-setting .setting-item-control{flex:1 1 auto;text-align:right;display:flex;justify-content:flex-end;align-items:center;gap:var(--size-4-2)}.search-engine-setting .search-engine-main-settings{width:100%;display:flex;flex-direction:column;border-top:solid 1px var(--background-modifier-border);margin-top:10px}.search-engine-main-settings-name{display:flex;justify-content:space-between;align-items:center;margin-bottom:5px;margin-top:5px}.search-engine-main-settings-url{display:flex;justify-content:space-between;align-items:center;margin-top:5px}.search-engine-setting .setting-item-name:before{content:"";display:inline-block;height:20px;width:1px;border-left:3px solid 
var(--text-accent);vertical-align:middle;margin-right:10px;margin-left:0}.wb-setting-title{display:flex;justify-content:space-between;flex-direction:row;align-items:center}.wb-setting-tab-group{display:flex;justify-content:flex-start}.wb-setting-searching{opacity:.4}.wb-tab-settings textarea{width:500px;height:200px;overflow-y:scroll}.wb-navigation-item{display:flex;align-items:flex-start;gap:3px;margin-right:10px;margin-bottom:2px;padding:6px 5px 4px;border-radius:5px}.wb-navigation-item-selected{background-color:var(--interactive-accent);color:var(--text-on-accent)}.wb-setting-header{border-bottom:var(--color-base-40) 0px solid}.wb-tab-settings{margin-bottom:20px}.wb-setting-heading{color:var(--color-accent)}.wb-about-icon{height:72px;text-align:center}.setting-item-control .surfing-setting-textarea{height:400px;width:200px}.setting-item-control .surfing-setting-input{width:400px}.wb-about-icon .surfing{height:72px!important;width:72px!important}.wb-about-text{font-size:16px;color:var(--color-accent)}.wb-about-card{display:flex;align-items:center;flex-direction:column;margin-top:30px}.wb-about-version{font-size:14px;text-decoration:unset!important;opacity:.8;color:var(--link-color)}.surfing-settings-icon{width:fit-content;height:fit-content;position:absolute;right:20px}.mod-wb-bookmark-bar .surfing-settings-icon{top:calc(var(--header-height) + 40px)}.wb-frame-notice{text-align:center;background-color:var(--color-yellow);font-size:14px;padding-top:4px;padding-bottom:4px}.wb-search-suggestion-container{background-color:var(--color-base-10);border-radius:var(--radius-l);filter:drop-shadow(0 4px 3px rgb(0 0 0 / .07)) drop-shadow(0 2px 2px rgb(0 0 0 / .06))}.wb-search-suggestion{border-radius:var(--radius-l);margin-bottom:-1px}.wb-search-suggestion:has(.wb-bookmark-suggest-item){max-height:300px;overflow-y:auto}.wb-search-suggestion::--webkit-scrollbar{display:none}.wb-search-suggest-item.is-selected{background-color:var(--color-accent)}.theme-light 
.wb-search-suggest-item.is-selected{color:var(--color-base-10)}.wb-search-suggest-item:first-child.is-selected{border-radius:var(--radius-l) var(--radius-l) var(--radius-m) var(--radius-m)}.wb-search-suggest-item:last-child.is-selected{border-radius:var(--radius-m) var(--radius-m) var(--radius-l) var(--radius-l)}.wb-search-suggest-item{display:flex;justify-content:space-between;align-items:center}.theme-light .wb-search-suggest-item.is-selected .wb-search-suggestion-index{color:var(--color-base-10);opacity:.6}.wb-search-suggestion-index{opacity:.2;font-size:12px;font-weight:700}input[type=text].wb-search-bar:active,input[type=text].wb-search-bar:focus,input[type=text].wb-search-bar:focus-visible{box-shadow:unset}input[type=text].wb-page-search-bar-input:active,input[type=text].wb-page-search-bar-input:focus,input[type=text].wb-page-search-bar-input:focus-visible{box-shadow:unset}.wb-theme-settings-working-on{background-color:var(--color-accent);flex-direction:column;border-radius:var(--radius-l)}.theme-light .wb-theme-settings-working-on .setting-item-name{color:var(--color-base-10)}.wb-omni-box{position:absolute;right:var(--size-4-9);top:var(--size-4-18);width:30%;height:fit-content;max-height:40%;overflow:auto;border-radius:var(--radius-m);padding:var(--size-4-4)}.wb-omni-box::-webkit-scrollbar{display:none}.theme-light .wb-omni-box{background-color:var(--color-base-10)}.theme-dark .wb-omni-box{background-color:var(--color-base-30)}.wb-omni-item-path{margin:var(--size-2-3) var(--size-2-2);text-emphasis:inherit;overflow-x:hidden;padding:var(--size-2-1);border-radius:var(--radius-s)}.theme-light .wb-omni-item-path{color:var(--color-base-20)}.wb-omni-item{margin:var(--size-2-3);background-color:var(--color-accent);padding:var(--size-2-1);border:var(--color-accent) 1px solid;border-radius:var(--radius-s)}.wb-omni-item-content-list{margin:var(--size-2-2) var(--size-2-1);gap:var(--size-2-1);border-radius:var(--radius-m)}.theme-light 
.wb-omni-item-content-list{background-color:var(--color-base-10)}.theme-dark .wb-omni-item-content-list{background-color:var(--color-base-30)}.wb-content-list-text{padding:var(--size-2-2) var(--size-2-1);line-height:var(--size-4-5);background-color:var(--color-base-10);border:var(--color-accent) 1px solid;width:initial;overflow-x:hidden;border-radius:var(--radius-m);margin-bottom:var(--size-4-3)}.theme-light .wb-content-list-text{background-color:var(--color-base-20);filter:drop-shadow(0 4px 3px rgb(0 0 0 / .07)) drop-shadow(0 2px 2px rgb(0 0 0 / .06))}.theme-dark .wb-content-list-text{background-color:var(--color-base-30)}.mod-wb-bookmark-bar .empty-state.wb-page-search-bar{position:unset}.wb-bookmark-bar{display:flex;align-items:center;overflow:hidden;padding-bottom:var(--size-2-1);padding-top:var(--size-2-1);padding-left:var(--size-4-2);min-height:32px;border-top:1px solid var(--background-modifier-border);border-bottom:1px solid var(--background-modifier-border)}div[data-type^=empty] .wb-bookmark-bar{position:absolute;top:var(--header-height);width:100%;margin-top:-1px;z-index:1}.wb-bookmark-item,.wb-bookmark-folder{max-width:120px;text-overflow:hidden;overflow:hidden;margin-right:var(--size-2-2);padding:var(--size-2-2);border:1px solid var(--color-base-10);border-radius:var(--radius-s);white-space:nowrap;width:10%;display:flex;align-items:flex-end;align-content:flex-end}.wb-bookmark-item:hover,.wb-bookmark-folder:hover{background-color:var(--color-base-30)}.wb-bookmark-item-title,.wb-bookmark-folder-title{text-overflow:ellipsis;overflow:hidden;padding-right:var(--size-4-1);padding-left:var(--size-2-1);font-size:var(--font-smallest)}.wb-bookmark-bar::-webkit-scrollbar{display:none}.wb-bookmark-bar-container{display:flex;width:95%;overflow-x:scroll}.wb-bookmark-bar-container::-webkit-scrollbar{display:none}.wb-bookmark-folder-icon,.wb-bookmark-item-icon{padding:unset;height:16px;margin-right:var(--size-2-2)}.wb-bookmark-folder-icon 
.lucide-folder-open,.wb-bookmark-item-icon .lucide-album{height:var(--size-4-4);width:var(--size-4-4)}div[data-type^=empty].workspace-leaf-content .view-content.mod-wb-bookmark-bar{padding:unset;overflow:auto}.surfing-bookmark-manager-header-bar{display:flex;justify-content:start}.surfing-bookmark-manager{margin:0 1em;display:flex;flex-direction:column}.surfing-bookmark-manager-header-bar .surfing-bookmark-manager-search-bar{display:flex;align-items:center}.surfing-bookmark-manager-search-bar .ant-input-affix-wrapper{padding:0 11px}.surfing-bookmark-manager-header-bar .ant-row{display:flex;align-items:center}.ant-table-header{min-height:55px}.surfing-bookmark-manager-header-bar{height:50px}:where(.css-dev-only-do-not-override-1np4o0i).ant-input-affix-wrapper{background-color:var(--background-modifier-form-field)}.surfing-bookmark-manager-header-bar button{margin:0 0 0 10px}.wb-bookmark-manager-entry{position:absolute;right:var(--size-4-3);padding:var(--size-2-1);border-radius:var(--radius-s);color:var(--color-red)}.wb-bookmark-manager-icon{height:18px;width:18px;display:flex}.wb-refresh-button,.wb-refresh-button .lucide-refresh-cw{height:var(--size-4-4);width:var(--size-4-4);color:var(--color-base-50)}.wb-refresh-button{margin-right:var(--size-4-2)}.ant-table-wrapper .ant-table-pagination.ant-pagination{margin:6px 0}.ant-table-container{height:100%;overflow:hidden;display:flex;flex-direction:column}.ant-table-wrapper,.ant-spin-nested-loading,.ant-spin-container{height:100%}.ant-table-wrapper{height:86vh}.ant-table-wrapper .ant-table-thead>tr>th{background-color:var(--background-secondary)}.ant-table{height:100%}.wb-reset-button{left:0}.ant-form-item .submit-bar{display:flex;justify-content:space-between}.theme-light .ant-btn-primary{background-color:#1677ff}.theme-light .ant-tree-treenode-checkbox-checked .ant-tree-node-content-wrapper{color:var(--text-on-accent)}.surfing-bookmark-manager-header 
.ant-col-6{display:flex;align-items:center}div[data-type^=surfing-bookmark-manager] .ant-table-thead{height:20px}.wb-bookmark-manager-entry:hover{background-color:var(--color-base-30)}.cm-scroller .wb-view-content-embeded{height:500px}.suggestion-item.wb-bookmark-suggest-item{display:flex;align-items:center;justify-content:space-between}.wb-bookmark-suggest-container{display:flex;gap:10px;max-width:92%}.wb-bookmark-suggestion-text{font-weight:bolder;overflow-x:hidden;text-overflow:ellipsis;white-space:nowrap}.wb-bookmark-suggestion-url{opacity:.4;overflow-x:hidden;text-overflow:ellipsis;white-space:nowrap}.wb-bookmark-modal h2{text-align:center}.wb-bookmark-modal .wb-bookmark-modal-btn-container{display:flex;align-items:center;flex-direction:row;gap:10px}.wb-bookmark-modal .modal-content{display:flex;flex-direction:column;align-items:center}.anticon-arrow-right svg{width:var(--icon-xs);height:var(--icon-xs)}.tab-tree-empty-container svg{width:var(--icon-xl);height:var(--icon-xl);opacity:50%}.tab-tree-empty-container{display:flex;align-items:center;text-align:center;justify-content:center}.tab-tree-empty-state{display:flex;flex-direction:column;align-items:center;gap:1em;opacity:30%}div[data-type^=surfing-tab-tree] ul,div[data-type^=surfing-tab-tree] ul li{list-style:none;margin:0;padding:0}.surfing-hover-popover{height:400px}.surfing-embed-website{height:800px}.surfing-hover-popover .surfing-hover-popover-container,.surfing-hover-popover .wb-view-content.node-insert-event,.surfing-embed-website .surfing-embed-website-container,.surfing-embed-website .surfing-embed-website-container .wb-view-content.node-insert-event{height:100%}.popover.hover-editor .popover-content:has(div[data-type^=surfing-view]){width:100%}.cm-browser-widget{border:1px solid var(--background-modifier-border)}.cm-browser-widget .wb-browser-inline{height:max(4vw,400px)}.cm-browser-widget 
.wb-show-original-code{position:absolute;right:var(--size-4-2);top:var(--size-4-2);visibility:hidden}.cm-browser-widget:hover .wb-show-original-code{visibility:visible}.surfing-hover-popover{z-index:99999} diff --git a/templates/flipside excalidraw note.md b/templates/flipside excalidraw note.md index 89c0a7d0..4b6d07dd 100644 --- a/templates/flipside excalidraw note.md +++ b/templates/flipside excalidraw note.md @@ -9,20 +9,8 @@ excalidraw-open-md: true `$= "![[" + dv.current().file.name + ".svg|700]]" ` %% -# Excalidraw Data -## Text Elements ## Drawing ```compressed-json -N4KAkARALgngDgUwgLgAQQQDwMYEMA2AlgCYBOuA7hADTgQBuCpAzoQPYB2KqATLZMzYBXUtiRoIACyhQ4zZAHoFAc0JRJQgEYA6bGwC2CgF7N6hbEcK4OCtptbErHALRY8RMpWdx8Q1TdIEfARcZgRmBShcZQUebTieGjoghH0EDihmbgBtcDBQMELoeHF0IKI5JH4ixhZ2LjQAZgB2asha1k4AOU4xbkaAVmaARmGBnka+PMhCDmIsbghcYaTC - -meYAERSoBGJuADMCMLaIEkWAYRgAeQBJBGUAGRuoIwBrACUAaTYAKRuABQAagBZADigNWRX2hHw+AAyrBgotBB5IQIoKQ2K8EAB1EjqbhTNYQZgYrEIBEwJESFFnE6YvySDjhLJoYYnNhwXDYNQwbjDAAMApO1mU1NQwumEEw3GcwwAHANtAMAJwDRoqgAszUaADZ5ZqeAL5Sc+Wg5Zrddp5brmgLVULNSrGoaiUVSZjsec2Pg2KRFgBiYYIYPBt - -EQTTc17KBlzb2+/0SDHWZhcwIZcMUfGSQkCxraYbNHg8G0Kw2C8YnSQIQjKaS5/OF4ulg08CtugQIXb8u2TZ3Gk1SmPCOA3Yis1DZAC6J325DSY+4HCEsPpwjmzOYE4KxNgiEJ0wAvidNOviABRYJpDIT6cnIRwYi4HZ7NkjXUll3ywbNXUnIgcK8S4rvg/5sNg2Kvqghz4GEeSHuAM50LgcBwAiz6lDu0DVmkixPqQQHVAwhAIBQABCUbDkIcY+ - -n6gb7AxjGQhA2AiOmUDPKkCKegg8Z0RIQYhkJzGsaQ7GcfoFEQVRNEJosyYcKmuDsSJbHpBxOz6AAYjC8KIqUJI+nSeQsWpGQSdx5J4sQBJoG6pliepFlktilLioZqJEaJ4mae8wh1puE7siZ3lOZpVxcjysD8kKXlmRpqRaZwUBabg+gwmaqDBUUoXmZpSUZHChBGKURpxY5eWpAAKlgUAAIJEMoDToME+xQKpFUJfo6GkPVYlsBQ1a4FBy6riF - -8USeecx1f1g0hFBSyzcxzDYJisIABrcJqAzDNaAq6sMPC6rqmranqg7uqtPr4AAmoSarKsdApFsMH7yiWKp/iZRhsAY3A7u0BBCKV8HlT5qR+dRxCBYssZ7ERMYkEVJXAWNRSI8QfGJqgAMQGRPoLQG5wqsTxNaVp4bvPcK7KYG54bPT9PkxAoPjZ1lnYhFUD1BOo2gSZcCBGYwjMKCpBI8VpR80Rc5pQgVNzEwszKP9UrpLgmjBFBGLAyc2AVNw - -OsICcHBy4bpC61KwhQABpRG6zRR2AAVgg2CZHCptwMCbDzFNGtawcRwIOAx50NCwTbghh5AA 
+N4IgLgngDgpiBcIYA8DGBDANgSwCYCd0B3EAGhADcZ8BnbAewDsEAmcm+gV31TkQAswYKDXgB6MQHNsYfpwBGAOlT0AtmIBeNCtlQbs6RmPry6uA4wC0KDDgLFLUTJ2lH8MTDHQ0YNMWHRJMRZFAHZFAFYyJE9VGEYwGgQAbQBdcnQoKABlALA+UFkYOIQQXHR8AGtoyXw8bOwNPkZOTExyHRgiACF0VErarkZcAGF6THp8UoBiADN5hZAAXyWgA ``` %% \ No newline at end of file diff --git a/épistémologie.problème de l'induction.md b/épistémologie.problème de l'induction.md index b7b58820..ff326207 100644 --- a/épistémologie.problème de l'induction.md +++ b/épistémologie.problème de l'induction.md @@ -1,9 +1,9 @@ --- -aliases: - - problème de l'induction tags: - s/philosphie - excalidraw +aliases: + - problème de l'induction up: - "[[épistémologie]]" excalidraw-plugin: parsed @@ -12,23 +12,45 @@ excalidraw-open-md: true + `$= "![[" + dv.current().file.name + ".svg|700]]" ` -%% # Excalidraw Data + ## Text Elements +Induction ^TlvtH7SW + + ^wt0T5FDI + +%% ## Drawing ```compressed-json -N4KAkARALgngDgUwgLgAQQQDwMYEMA2AlgCYBOuA7hADTgQBuCpAzoQPYB2KqATLZMzYBXUtiRoIACyhQ4zZAHoFAc0JRJQgEYA6bGwC2CgF7N6hbEcK4OCtptbErHALRY8RMpWdx8Q1TdIEfARcZgRmBShcZQUebTieGjoghH0EDihmbgBtcDBQMELoeHF0IKI5JH4ixhZ2LjQAZgB2asha1k4AOU4xbkaAVmaARmGBnka+PMhCDmIsbghcYaTC +N4KAkARALgngDgUwgLgAQQQDwMYEMA2AlgCYBOuA7hADTgQBuCpAzoQPYB2KqATLZMzYBXUtiRoIACyhQ4zZAHoFAc0JRJQgEYA6bGwC2CgF7N6hbEcK4OCtptbErHALRY8RMpWdx8Q1TdIEfARcZgRmBShcZQUebQB2bQBWGjoghH0EDihmbgBtcDBQMBKIEm4IAHEASUlCAAkAOQBhZvqAKQAOQOaAZQAGAAUARSMADQBrVJLIWEQK3FJSNip+ -meYAERSoBGJuADMCMLaIEkWAYRgAeQBJBGUAGRuoIwBrACUAaTYAKRuABQAagBZADigNWRX2hHw+AAyrBgotBB5IQIoKQ2K8EAB1EjqbhTNYQZgYrEIBEwJESFFnE6YvySDjhLJoYYnNhwXDYNQwbjDAAMApO1mU1NQwumEEw3GcwwAHANtAMAJwDRoqgAszUaADZ5ZqeAL5Sc+Wg5Zrddp5brmgLVULNSrGoaiUVSZjsec2Pg2KRFgBiYYIYPBt +UsxuZwBGAE4ANm0AFi3+zsOAZh5+86Tzw73ztcgYTa3zre0d3c6eJN3z87xPZPCAUEjqbgXc6fPZJW79eLXW73EGSBCEZTSbg/ZJJHidd79JJ7PY8Q4g6zKYLcfog5hQZYTBDNNj4NikCoAYi2CB5POmpU0uGwE2UyyEHGILLZHIkDOszDgiyyUAFkAAZoR8PherBqRJBB41RB6YyEAB1cGSSF0hlsJm6mD69CG8og8WYjjhXJoLYgthK7BqF6+/ 
-EQTTc17KBlzb2+/0SDHWZhcwIZcMUfGSQkCxraYbNHg8G0Kw2C8YnSQIQjKaS5/OF4ulg08CtugQIXb8u2TZ3Gk1SmPCOA3Yis1DZAC6J325DSY+4HCEsPpwjmzOYE4KxNgiEJ0wAvidNOviABRYJpDIT6cnIRwYi4HZ7NkjXUll3ywbNXUnIgcK8S4rvg/5sNg2Kvqghz4GEeSHuAM50LgcBwAiz6lDu0DVmkixPqQQHVAwhAIBQABCUbDkIcY+ +q0wqQMXCODVYg+1B5AC6IPV5Eyse4HCE2pBhElWAW/WN4slXuY8eKM2g8HEvAjAF86QgEMRuOd+ltOnsu/1gRGGExWJxuJ2+1XGCx2BxGpwxCOeOdOh3STtw1XCMwACLpKAt7jqghhEGaYSSgCiwUy2XjBRmRQjpTmtegWFVT1K5QkABV8PQoPV4l6c0IAfesI2TfshDgYhcF3VtfXiJJexOHYkPiHZHn7IgOCmNAsxzLC2BFPc0APfAj37OA2Dz -n6gb7AxjGQhA2AiOmUDPKkCKegg8Z0RIQYhkJzGsaQ7GcfoFEQVRNEJosyYcKmuDsSJbHpBxOz6AAYjC8KIqUJI+nSeQsWpGQSdx5J4sQBJoG6pliepFlktilLioZqJEaJ4mae8wh1puE7siZ3lOZpVxcjysD8kKXlmRpqRaZwUBabg+gwmaqDBUUoXmZpSUZHChBGKURpxY5eWpAAKlgUAAIJEMoDToME+xQKpFUJfo6GkPVYlsBQ1a4FBy6riF +HJ8gfW87zXRiHwgu8GJmLZOKOMlUJ2Q59k6fYeB2d8wGceJOm0Tokk6Tj8RJeItniQ5OhY992JKd5DgSaSzkuJE7jHO9nBxOF8UJYlSUONT6IfMAoRhOF20RG5DNE5x+gSFStguOESTJMN+h2FjwJBfBQigFl9H0NQ4MGajsjou8Lh2bRri2X533E7RARQwEklC/t6UWKAACE80cDhlEzbN8BBLJiHKyU82qvDattEqAEElhWNFcHg1B8Lq/sGu6 -8USeecx1f1g0hFBSyzcxzDYJisIABrcJqAzDNaAq6sMPC6rqmranqg7uqtPr4AAmoSarKsdApFsMH7yiWKp/iZRhsAY3A7u0BBCKV8HlT5qR+dRxCBYssZ7ERMYkEVJXAWNRSI8QfGJqgAMQGRPoLQG5wqsTxNaVp4bvPcK7KYG54bPT9PkxAoPjZ1lnYhFUD1BOo2gSZcCBGYwjMKCpBI8VpR80Rc5pQgVNzEwszKP9UrpLgmjBFBGLAyc2AVNw +5YKD6gaIEWcbjSCE8KBI1AyLCQpG0KStIE/dAfz/ACgONJ8Kl3TA337DY0GcUluMQ2E9n6fj2zJEEQ1QZwAoSPYdiuXseB4RSdk6EEwWICFfT2RJfL4857puLZiT4fs0QxLE0EOJIKSq50mIEO0mWldkuT5XkkGPYVRRLKVWUJuVyA4RVlUSlMtR1PVn1dVtbTNS1QetNBEarU17QQR1nRNVk3X7D1JDLeM/X7ANhWDEcwxBKMoNjG9WNKVNcHTA -OsICcHBy4bpC61KwhQABpRG6zRR2AAVgg2CZHCptwMCbDzFNGtawcRwIOAx50NCwTbghh5AA +ahtzfMLvQXAtmLU9iFlmqCMF5sBpJJJFMuHyQQnIcuF9cl+3dqcZw4Od+YRL5ZMBoytq3HclpWhBj0ti8MhVeNDcg6DYKWpSkL2FD8UQnGIGw3DBvawjiIGmOQRO1UJGqSUhGwKAp2LSgv1fCpa+IevG+HFNOCgXpCCMWsYd77IADE9a1V6Mf7KvOqIZRPfQMRsiYY0J0b9x54xJeIBi4hiGpEE9GyXA8yYDMJBqOomlaDpumZAYRnGKZ/VIDE8w + 
+IVvTvbuuG6bikQgoBsAAErhEHrWBkQhY5YXPvUdEmJq6oA+L8NaaxNplCmhQKA/QvxJDHpuaoh0azHTbiCE2zhLhxD+NJIKi54iGWki9TYsJtCdn6AudKiJuwkmBlaEcHxdg7GEV9FSZIeCdiBkjBBqNUA3ExlSWs+chb42prKdA3ISb8jJiKdWkoCbqOgHTBmgQmb9k1NqUW7MJacyKnjC0/C0Zc2FlYioHMLZ+Blt6Ec/pAzK1DPndWMY4z5G1 + +hqNMCBL7F1th+Y2CxDgeNLN4tqMSBD2xHJ0CScMti8J9oOJu/M3b5M4P7QOqBAZXHePECSuZI7BDgvuQ8MCqwnglMQBOV5aIpOGlWKCMEGkISzihCG8QJFhTzEXFOVY2Rl0aeRZpj424SGbhQb+SCIDGnVH3AeQ9uD4lHlACeMV8DT0rq+bei8KjBHVGdccTBN4EAubvYBcBjQnyiOfUgUSpmlHZB/DgX8lnoGNLgIBoDwG7LQFAhZkBsIIHgSjJ + +BKCColHWiUDB20ICdAANL0CMLgAAjlVYh8w5RkPOpsS40IMq/B4KSH6mSpFVleldSSP0tIqQRMpTsfDeYjm0ucYRwjFz7BUhcRCqIZFIPRgo7GzjVEyiJlo0m/YhS6MpgY46xilSmNuTrFmriDQ2ONCohxfKnF2LNIal0xr3TCE9Mk5BvilawBVoE8UwStYpgid8ku644kSFwDwRJVtHU/LSUtFKhwFzZ29ncycw40CYXjR7UptYMpJD4qcDK4cy + +h1IQAM1A0K45tI6UnG2PTSh9PTgNTOyEgrZ3xPnQuFawpESZOXJpZyf7LPdC3IFGyDk7OHrmrZ49J4nO4DPKsc8F672uXqyAG9zCPLncdAMby+5ny9F8g2frfnv38ICntwLAHALAawSFRbSDQPGV6BFiCRzaFQai9B/ZMWdUOMMc4Qh6gABkOSVxIWSn+5DNjCTiFUhSecIPMMuvdNhS5OFVN7JkgWpQQZg2QYIoV+w+I/GjZIyViK2zTtKJSOVl + +rhaaokJo4mxo1UU0tjR9A8p6Y6pVJsg1bM3G2qo0yHmWG42lFNda8WRo7WeOtr6Z1QZXUBLVh6zWoTvV60iXu1JZQA2m3OCG6T0TK0RoGjJTihxlKriKQmpeubfYlNnLWB4JIEQImExHbc9SlrFtVfHS85bukgmrYWut2cG3EldrAnCrbS4drmRRGdA6VlrIqJs7ZEC9lMp1n3I5U8p3dqgE8q5CAbnr3uSu/ABW5QbuPluz5vrNN/KPfgJLvb+y + +gvPRCyBN6YUFzgVKp9L6wBgTAKxaacA4C6nTtwSs0A0SZAqDBUgr9CgMEIAgCgpVyZ6Kpoq2j6o9v7YFBAbAIhdXVF3PoXUZoWMQDoyTQ7x2lgqjOxkDb6rmNqK1QqDjZjSgPdO+dse3GnTWIk8tv7T3zuXeFoJvmqBhNHZOxDjIUOHQ8aNaD37iPsjPf0CA+1Xjyw+LB1jqAOOADyfj5PINVsTx72OAdZYnac2n/2Mhj1S1e/ZLOkf6DWRV9AC7 + +7sk5xxN0g+WeoTRCBpytCO6ek/O2eSUY1epS4WBLoXcucfK9WcB9AlNDvMGwMsbUYxuD3Q+E7aNXZOKAkuPENYJojesnwAATTbNdR6NxUKCRuHieHRg2AGCmz7Ag0Cn2wjhASNB3P6cZDx20/TEB9cO7FCQYdeymIQFT8QXUCBXloDI1n9+xAACybBiAIEV7gTQwRO3zJT8XljGDSqsimqQZQQoAAUEj7e8CUtQPvvfPJJAAJTGjAcobMiwKjt67 + 
+5wgf8/eDtgH8Psf0fMdy5RwgCn3d6ZRZ1hEsB+Z37ErQBgrI1fa/cC81WbARB8/Xtvf2AFc2oVdf9EAuF1+uvr8gHYAAVggA3MwL0ACnAGXhXlXjXtHF2stsKI3IwF+IHvgMHjOrriaOkP/D3P2MdvSAYF+LruGgXO2jAfXlhBFJ1P/IgcgRWmtOAGihABYuEFNmBPWEAA== ``` %% \ No newline at end of file diff --git a/épistémologie.problème de l'induction.svg b/épistémologie.problème de l'induction.svg new file mode 100644 index 00000000..fb46e61e --- /dev/null +++ b/épistémologie.problème de l'induction.svg @@ -0,0 +1,10 @@ + + + + + + + + Induction \ No newline at end of file