Maya is the industry standard. Blender may offer most of what Maya offers from a features standpoint, but Maya had the first-mover / network-effects advantage: an ecosystem of third-party plugins has sprouted up around it over the last 20 years that VFX / mograph houses now depend on, which forces vendor lock-in despite Blender now being on par.
Hello fellow astro-developer. What's the source of the underlying ephemeris data? I've built a few astro APIs in the past and never found anything better than Swiss Ephemeris. I wonder what you're using, and what the purpose of this app is for you.
Hey! I’m using ephemerides from JPL (Development Ephemeris) and IMCCE (INPOP). I also developed an open source library for extracting data from these ephemerides: https://github.com/rhannequin/ruby-ephem
These files provide geometric barycentric data, so they need quite a few transformations and physical corrections to become accurate coordinates usable by an astronomer. That's Astronoby.
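To give a rough idea of what one of those corrections involves, here is a minimal sketch of iterative light-time correction (my own illustration in Python, not the Astronoby code; `position_of` is a hypothetical lookup into the ephemeris, and aberration, precession, nutation, etc. are all omitted):

```python
# Minimal sketch of light-time correction: the target is seen where it was
# when its light left it, not where it is "now". `position_of(body, t)` is a
# hypothetical function returning a barycentric position vector in km.

C_KM_S = 299_792.458  # speed of light, km/s

def light_time_corrected(position_of, target, observer, t, iterations=3):
    obs = position_of(observer, t)
    tgt = position_of(target, t)
    for _ in range(iterations):
        # the current separation implies a light travel time; re-evaluate the
        # target at that earlier emission time and repeat until it settles
        dist_km = sum((a - b) ** 2 for a, b in zip(tgt, obs)) ** 0.5
        light_time = dist_km / C_KM_S
        tgt = position_of(target, t - light_time)
    return [a - b for a, b in zip(tgt, obs)]  # observer -> target vector
```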
And finally Caelus is the project that uses the final data for astronomy use-cases.
My goal is to present data in a nice way, but more importantly to provide a fully open source experience. Everything is open source starting from the ephemeris file up to the website itself. The data on the website is not particularly new, although I’m trying my best to offer a good UX. What is new in my opinion is to be able to trace all the logic that produced it.
There is, simply put, no ongoing process and no feedback loop. The model does not learn. The cognition ends when the inference cycle ends. It's not thinking, it just produces output that looks similar to the output of thinking. But the process by which it does that is wholly unrelated.
Most AI tooling is shipped with a feedback loop around the LLM. The quality of Claude Code, for example, lies in the feedback loop it provides on your code. Maybe the LLM itself isn't thinking, but the agent, which ships an LLM plus a feedback loop, definitely shows thinking qualities.
Just now, in a debugging session with Claude Code:
* let me read this file...
* let me read this file...
* I think there's a caching issue with the model after dropping the module. Let me check if there's a save or reload needed after DROP MODULE. First, let me verify something:
* creates a bash/javascript script to verify its assumption
* runs the script (after review and approval)
* Aha! I found the problem! Look at the output...
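Schematically, the loop described above looks something like this. This is just a rough sketch of the general pattern, not Claude Code's actual internals; `llm` and the tool functions are hypothetical placeholders.

```python
# Generic LLM-plus-feedback-loop agent (illustrative only).

def agent_loop(llm, tools, task, max_steps=10):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = llm(history)                     # model proposes the next step
        if action["type"] == "final_answer":
            return action["content"]              # "Aha! I found the problem!"
        # otherwise run the requested tool: read a file, run a script, ...
        observation = tools[action["tool"]](**action.get("args", {}))
        # the result is fed back in, so the next proposal is grounded in
        # what actually happened -- this loop is the "thinking" quality
        history.append({"role": "tool", "content": observation})
    return None  # gave up within the step budget
```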
Without getting into theory of mind it's a bit difficult to elaborate, and I don't have the time or the will for that. But the short version is that thinking is interconnected with BEING as well as will, and the agent has neither, in a philosophically formal sense. The agent is deterministically bound. So it is a fancy Rube Goldberg machine that outputs letters in a way that creates the impression of thought, but it is not thought, in the same way that some birds can mimic human speech without even the slightest hint as to the words' or sentences' meaning, underlying grammar, connotations, subtext, context, intended use, likely effect, etc. Is speech speech if the speaker has no concept whatsoever of said speech's content, and cannot use it to actualize itself? I'd say no. It's mimicry, but not speech. So that means speech is something more than just its outward aspect - the words. It is the relation of something invisible, some inner experience known only to the speaker, VIA the words.
Whereas a gorilla who learns sign language to communicate and uses that communication to achieve aims that have a direct correlation with its sense of self - that's thought in the Cogito, Ergo Sum sense of the word.
Thought as commonly conceived by the layman is a sort of isolated phenomenon that is mechanical in nature and can be judged by its outward effects; whereas in the philosophical tradition, defining thought is known to be one of the hard questions, owing to its mysterious qualia of being interconnected with will and being, as described above.
Guess I gave you the long answer. (Though, really, it could be much longer than this.) The Turing Test touches on this distinction between the appearance of thought and actual thought.
The question goes all the way down to metaphysics; some (such as myself) would say that one must be able to define awareness (what some call consciousness - though I think that term is too loaded) before you can define thought. In fact that is at the heart of the western philosophical tradition, and consensus remains elusive after all these thousands of years.
The obvious counterargument is that a calculator doesn't experience one-ness, but it still does arithmetic better than most humans.
Most people would accept that being able to work out 686799 x 849367 is a form of thinking, albeit an extremely limited one.
First flight simulators, then chess computers, then Go computers, and now LLMs: the same principle extended to much higher levels of applicability and complexity.
Thinking in itself doesn't require mysterious qualia. It doesn't require self-awareness. It only requires a successful mapping between an input domain and an output domain. And it can be extended with meta-thinking where a process can make decisions and explore possible solutions in a bounded space - starting with if statements, ending (currently) with agentic feedback loops.
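As a toy illustration of that gradient (my own gloss, not anything from the thread): a fixed input-to-output mapping on one end, and a bounded exploration of candidate solutions on the other.

```python
# "Thinking" in the minimal sense: a fixed mapping from inputs to outputs.
def mapping(x):
    return "even" if x % 2 == 0 else "odd"

# "Meta-thinking": explore a bounded space of candidates and keep the best.
def explore(candidates, score):
    best = None
    for c in candidates:
        if best is None or score(c) > score(best):
            best = c
    return best

print(mapping(42))                                         # -> even
print(explore(range(10), score=lambda c: -(c - 7) ** 2))   # -> 7
```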
Sentience and self-awareness are completely different problems.
In fact it's likely with LLMs that we have off-loaded some of our cognitive techniques to external hardware. With writing, we off-loaded memory, with computing we off-loaded basic algorithmic operations, and now with LLMs we have off-loaded some basic elements of synthetic exploratory intelligence.
These machines are clearly useful, but so far the only reason they're useful is because they do the symbol crunching, we supply the meaning.
From that point of view, nothing has changed. A calculator doesn't know the meaning of addition, an LLM doesn't need to know the meaning of "You're perfectly right." As long as they juggle symbols in ways we can bring meaning to - the core definition of machine thinking - they're still "thinking machines."
It's possible - I suspect likely - they're only three steps away from mimicking sentience. What's needed is a long-term memory, dynamic training so the model is constantly updated and self-corrected in real time, and inputs from a wide range of physical sensors.
At some point fairly soon robotics and LLMs will converge, and then things will get interesting.
Whether or not they'll have human-like qualia will remain an unknowable problem. They'll behave and "reason" as if they do, and we'll have to decide how to handle that. (Although more likely they'll decide that for us.)
Some of your points are lucid, some are not. For example, an LLM does not "work out" any kind of math equation using anything approaching reasoning; rather, it returns a string that is "most likely" to be correct, based on probabilities learned from its training. Depending on the training data and the question being asked, that output could be accurate or absurd.
That's not of the same nature as reasoning your way to an answer.
So if you don't have a long-term memory, you're not capable of sentience? Like the movie Memento, where the main character needs to write everything down to remind himself later because he's not able to remember anything. This is pretty much like LLMs using markdown documents to remember things.
"To escape the paradox, we invoke what we call the “Homunculus Defense”: inside every human is a tiny non-stochastic homunculus that provides true understanding. This homunculus is definitionally not a stochastic parrot because:
1. It has subjective experience (unprovable but assumed)
2. It possesses free will (compatibilist definitions need not apply)
3. It has attended at least one philosophy seminar"[1]
For practical everyday uses, does it really matter if it is "real thinking" or just really good "artificial thinking" with the same results? The machine can use artificial thinking to reach desired goals and outcomes, so for me it's the kind of thinking I would want from a machine.
It seems pretty clear to me though that being good at intellectual tasks / the sort of usefulness we ascribe to LLMs doesn't strongly correlate with awareness.
Even just within humans - many of the least intellectually capable humans seem to have a richer supply of the traits associated with awareness/being than some of the allegedly highest-functioning.
On average you're far more likely to get a sincere hug from someone with Down's syndrome than from a multi-millionaire.
But I'm more interested in this when it comes to the animal kingdom, because while ChatGPT is certainly more useful than my cat, I'm also pretty certain that it's a lot less aware. Meaningful awareness - feelings - seems to be an evolutionary adaptation possessed by k-strategy reproducing vertebrates. Having a small number of kids and being biologically wired to care for them has huge implications for your motivation as an animal, and it's reasonable to think that a lot of our higher emotions are built on hardware originally evolved for that purpose.
(Albeit the evolutionary origins of that are somewhat murky - to what extent mammals/birds reuse capabilities that were developed by a much earlier common ancestor, or whether it's entirely parallel evolution, isn't known afaik - but birds seem to exhibit a similar set of emotional states to mammals, that much is true).
You're moving the goalposts and contradicting yourself with language games.
Something doesn't need to learn to think. I think all the time without learning.
There's also an argument that machines are already starting to crack learning, with literal reinforcement learning and feedback loops.
Your language game was when you said "the cognition ends...", since cognition is just a synonym for thinking. "The thinking ends when the inference cycle ends. It's not thinking" becomes a clear contradiction.
As for "the process by which it does that is wholly unrelated", buddy it's modelled on human neuron behaviour. That's how we've had this generative AI breakthrough. We've replicated human mental cognition as closely as we can with current technology and the output bears striking resemblance to our own generative capabilities (thoughts).
Happy to admit it's not identical, but it's damn well inside the definition of thinking, and may also cover learning. It may be better to take a second look at human thinking and wonder if it's as cryptic and deep as we thought ten, twenty years ago.
Are you not an American? (Giving you the benefit of the doubt here)
In America, immigration enforcement is not a criminal issue but a civil issue. So the proper (as in, according to the laws and norms of the last many decades) and appropriate channel through which the enforcement of immigration is meant to be resolved is the courts. The current usage of ICE as a gestapo is literally illegal (it deprives "suspects" of due process and civil/human rights), in violation of the Geneva Conventions, and so on.
Furthermore, even if we accept the blatantly immoral and illegal idea that federal agents should be able to break and enter into homes and kidnap, traumatize, and traffic people without the slightest pretense of legal justifiability (warrants etc.), the fact is that they are not even attempting to choose people by any discernible metric other than their skin color. So it is objectively not about the enforcement of the law; it is about stochastic terrorism and ethnic cleansing, as that is the only thing their actions consistently demonstrate.
Can you explain more how you reached the conclusion that the enforcement of immigration is meant to be resolved in courts? Parking is not a criminal issue, does it also mean that I need a court order to tow a car blocking my driveway? Building code is not a criminal issue, does it mean I need a court order to install a power outlet? What about car licensing, do you go to court for new tags or to DMV/whatever is your state agency for that? Insurance? Any regulation, really?
It's exactly because this is not a criminal issue that due process in immigration does not require court hearings, bail, etc. The immigration court is not an Article III court; it could just as well be named the "immigration adjudication department" because it's an Executive office. If you believe you have been wronged in the immigration process then you can try to sue the government for damages in an actual civil court, but the law does not require the government to sue you in order to enforce the immigration laws.
Your bad-faith argument does not merit a lengthy reply, so I will simply say that the way this has been handled for DECADES has been to send formal notice, hold court hearings to determine whether someone should be deported, THEN deport them. The way things are being done now is that the gestapo simply identify people with brown skin (now "legal" in a technical sense due to a corrupt SCOTUS ruling but ACTUALLY UNCONSTITUTIONAL and IMMORAL in reality) and ship them to concentration camps and/or countries they have no relation to, to be used as slave labor in quid-pro-quo arrangements with foreign entities. No due process in any of that, which equates to cruel and unusual extrajudicial punishment and, in some cases, death. NOTE that the lack of due process, or even warrants or reasonable-search requirements, means that this CAN HAPPEN and IS HAPPENING to US citizens - to whom immigration enforcement should never even apply. NOTE the removal of guardrails by SCOTUS, which empowers ICE (the "E" stands for enforcement) to act in lieu of an actual JUDICIAL process.
Next time, try not to be a Nazi. All people are equally deserving of basic human rights, and that includes not being racially profiled and rounded up like slave meat for the grinder just because of the color of their skin.
Technology should make our lives better. Whether it's social media, AI, or nuclear power, if we introduce technology that, even with its incredible benefits, ends up causing harm or making life worse for many people (as all of the above have in various ways), we should reconsider. This should be self-evident. That doesn't mean we get rid of the technology, but that we refine it. Chernobyl didn't mean the world got rid of nuclear power. It meant we became more responsible with it.
Anyway there is a name for your kind of take. It is anti-humanist.
My exact thoughts, reading this headline. What an interesting legacy to leave behind! I never thought about the person behind DoA, but reading the Twitter thread from his rival, he seems to have been a very interesting character.
It may be nationalist, but not because it's showing American industry and agriculture. All nations have an intrinsic self-interest in such things... there is no nation on Earth now or in the past that would take the stance which you imply is the only acceptable one - a disregard for their own productivity, wealth, and self-sufficiency.