I think it's imprecise to say that opening knowledge is unnecessary. What is unnecessary is opening theory, or more specifically, rote memorisation of opening lines.
This is different from opening understanding: the importance of tempo, development, and controlling the centre; the different pawn structures, middlegames and endgames that result from different openings; the plans and motifs typical in various opening complexes. Any late-beginner to intermediate player needs to pick and study an opening. The problem is that instead of studying the opening, players try to memorise lines without improving their understanding of the resulting middlegames and the plans they should be playing for. Then, when their opponent diverges from the main lines (which in my experience happens in 99% of games between players below 2000, because it's very rare that both players have memorised the same long line), they don't know what to do.
I'm a 1900 FIDE player, and I have an opening repertoire of sorts. For instance, I play the Modern Benoni with Black: an extremely theoretical opening, and yet I have only a small handful of longer lines actually memorised, because they're simply too complex for me to figure out over the board (e.g. the ...b5 lines against Bd3/h3/Nf3 setups). What I have studied extensively is the strategic landscape of the Benoni, games by strong players in the opening, and so on. And I have years of experience playing the opening. I know what kinds of exchanges typically favour me or my opponent, and what pawn breaks each player should be trying for. All of that knowledge is crucial for me to get anything out of the opening. I have beaten players tactically much stronger than me in this opening simply because my understanding of this specific opening was better than theirs.
Tactical ability is obviously important, but it's definitely not everything.
In general I certainly wouldn't disagree with this; it's what I was alluding to with general ideas that stick with you. But I'd call this a different thing than opening study. For instance, one can get Benoni-like structures in the King's Indian, Benko, English, Nimzo, and more! So it's not really understanding the opening, but understanding how to play a certain structure that arises in many different openings.
And it has nothing to do with memorization. You mentioned the ...b5 stuff against Bd3/h3/Nf3 setups. You might not be able to calculate to the full depth of what happens if white manages to hold onto his extra pawn, but you can certainly calculate at least to the point of: 'okay, in most lines I'm getting my pawn back, disrupting his center, and getting my play going. In the one line where he holds onto it (the Bxb5 stuff), he's going to have a bit of difficulty castling, his pieces look disorganized, and his extra pawn and b2 both look weak.' That's more than enough, on general principles, to go for the sac, I think.
It's typical in these situations that the price per share is negotiated, with the current share price as a starting point. It's fairly unusual, I think, for the company selling stock to get a price significantly higher than market price; more typically there's a slight discount. At least that's been the case for every stock I've owned where dilution has occurred. We also don't know yet when exactly this deal was negotiated and approved, so it's hard to say. Considering where INTC has been very recently (below $20), $23.28 seems very reasonable to me.
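To get a feel for the scale involved, here's the arithmetic with made-up round numbers (these are NOT Intel's actual share counts or prices, just an illustration of why a small discount to market barely moves the value of existing shares):

```python
# Toy dilution arithmetic with hypothetical round numbers.
old_shares   = 4_000_000_000   # existing shares outstanding (made up)
market_price = 24.00           # prevailing market price (made up)
new_shares   =   400_000_000   # newly issued shares (made up)
issue_price  = 23.28           # negotiated placement price, a ~3% discount

proceeds = new_shares * issue_price
# If the cash raised is valued dollar-for-dollar, the blended per-share
# value after issuance is:
blended  = (old_shares * market_price + proceeds) / (old_shares + new_shares)
dilution = new_shares / (old_shares + new_shares)

print(f"proceeds: ${proceeds / 1e9:.2f}B")            # $9.31B
print(f"theoretical post-issue price: ${blended:.2f}") # $23.93
print(f"ownership dilution: {dilution:.1%}")           # 9.1%
```

In this toy setup, selling roughly 9% of the company at a 3% discount to market shifts the blended fair value by only about seven cents, which is why negotiated placements near the prevailing price are routine.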
The reason the stock surged past $30 is the general market's reaction to the news, and the subsequent buying pressure, not the stock transaction itself. It seems likely that once the exuberance cools down, the share price will pull back; where to, I can't say. Somewhere between $25 and $30 would be my bet, but this is not financial advice, I'm just spitballing here.
The thing is, ChatGPT isn't really designed at all. It's cobbled together by running some training algorithms on a vast array of stolen data. They then tacked some trivially circumventable safeguards on top for PR reasons. They know the safeguards don't really work; in fact, they know it's fundamentally impossible to make them work, but they don't care. They're not really intended to work; rather, they're intended to give the impression that the company actually cares. Fundamentally, the only thing ChatGPT is "designed" to do is make OpenAI into a unicorn; any other intent ascribed to their process is either imaginary or intentionally feigned for purposes of PR or regulatory capture.
I have to say, when I see a post by a company like OpenAI about "safety, freedom and privacy", I can't keep a straight face. They might as well title the piece "If you don't mind, we'd like to gaslight you across several paragraphs". No thanks.
I'm sure it's true and all. But I've been hearing the same claim about all those tools uv is intended to replace, for years now. And every time I try to run any of those, as someone who's not really a python coder, but can shit out scripts in it if needed and sometimes tries to run python software from github, it's been a complete clusterfuck.
So I guess what I'm wondering is: are you a Python guy, or are you more like me? Because for basically any of these tools, Python people tell me "tool X solved all my problems" and people from my own cohort tell me "it doesn't really solve anything, it's still a mess".
I'm about the highest tier of package-manager nerd you'll find out there, but despite all that, I've been struggling to create/run/manage venvs for ages, always afraid that installing a pip package or some piece of Python-based software might muck up my Python versions.
I've been semi-friendly with Poetry already, but mostly because it was the best thing around at the time, and a step in the right direction.
I'm (reluctantly) a python guy, and uv really is a much different experience for me than all the other tools. I've otherwise had much the same experience as you describe here. Maybe it's because `uv` is built in Rust? ¯\_(ツ)_/¯
But I'd also hesitate to say it "solves all my problems". There are plenty of Python problems outside the core focus of `uv`. For example, I think building a Python package for distribution is still awkward, and the docs are not straightforward (pointing to non-Python files that I want to include was fairly annoying to figure out).
As a mainly Python guy (data engineering, so a new project for every ETL pipeline = a lot of projects), uv solved every problem I had before with pip, conda, miniconda, pipx, etc.
Yes. Because it will decrease the legitimate traffic online that is encrypted, which makes it easier to pick out encrypted channels from the noise. A few listeners at key nodes in the country's communications network to flag encrypted signals for investigation or simple disruption and you're G2G.
It's the "If you ban guns, only criminals will have guns" theory, except the other side of that coin is "It's real easy to see who the criminals are if guns are banned: they're the folks carrying guns."
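To make "picking encrypted channels out of the noise" concrete: one classic heuristic (a toy sketch of my own, not a claim about any specific surveillance system) is byte-level Shannon entropy. Well-encrypted traffic is statistically indistinguishable from random bytes, while most plaintext is heavily skewed toward a few byte values:

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits per byte; uniform random bytes approach 8.0."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

# Repeated English text: few distinct byte values, skewed distribution.
text = b"the quick brown fox jumps over the lazy dog " * 50
# Stand-in for ciphertext: good encryption output looks like random bytes.
noise = os.urandom(len(text))

print(f"plaintext:    {shannon_entropy(text):.2f} bits/byte")  # well under 8
print(f"'ciphertext': {shannon_entropy(noise):.2f} bits/byte") # near the 8.0 max
```

The obvious caveat is that compressed data also scores near 8 bits/byte, so a real filter would need more signal than this; but it does illustrate why encrypted blobs stand out more easily once legitimate encrypted traffic becomes rare.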
How do you filter encrypted channels from the noise? For example, say the criminals now communicate by having a browser extension write e2ee encrypted todo items on a shared todo list app.
Sure, you could make unauthorized, fully encrypted communication illegal. But what would be the punishment for using it? Worse than for smuggling, human trafficking, murder? I seriously doubt it. If you're a criminal risking decades in prison for major crimes, using some illegal software is 100% worth it, if it significantly reduces the risk of getting caught for the real crimes you're committing.
You can't make laws that govern how criminals behave. All chat control will really accomplish is maybe a momentary string of arrests (which is meaningless in the long term; there's always someone to take over), and, in the longer term, worse privacy and security for everyone except the criminals.
The UK has the idea of contempt of court. Even as it stands, the court can demand you submit some evidence, say an encryption key for a document. And if you refuse, they can even imprison you until you surrender the key.
Another principle is that when someone destroys evidence, you can presume it was incriminating.
I think you could make the punishment proportional to the presumed crime.
This is the conclusion I come to whenever I try to grasp the works of Nagel, Chalmers, Goff, Searle et al. They're just linguistically chasing their own tails. There's no meaningful insight below it all. All of their arguments, however complex, rely on poorly defined terms like "understand", "subjective experience", "what it is like", "qualia", etc. And when you try to understand the arguments with the definitions of these terms left open, you realise the arguments only make sense when the terms include in their definition a supposition that the argument is true. It's all just circular reasoning.
“The Feeling of What Happens” by the neuroscientist Antonio Damasio [0], from some years ago, does an excellent job of building a framework for conscious sensation from the parts: as I recall, it constructs a theory of “mind maps” from various nervous-system structures, and left me with the sense that I could afterwards understand them.
As a radical materialist, I think the problem with ordinary materialism is that it boils down to dualism: some types of matter (e.g. the human nervous system) give rise to consciousness and other types of matter (e.g. human bones) do not.
Ordinary materialism is mind-body/soul-substance dualism with a hat and lipstick.
Human bones most definitely do contribute to feeling, but not through logos. The book expands upon the idea of mind body duality to merge proprioception and general perception.
I’d bet bats would enjoy marrow too if they could.
So how does a radical materialist explain consciousness? That it too is a fundamental material phenomenon? If so, are you stretching the definition of materialism?
I find myself believing Idealism, or monism, to be the more likely fundamental picture.
well the hard problem of consciousness gets in the way of that
I assume that by materialism you mean our brain carries consciousness as a field of experience arising out of neural activity (i.e. neurons firing, some kind of information processing leading to models of reality simulated in our mind, leading to us feeling aware), i.e. that our awareness is the 'software' running inside the wetware.
That's all well and good, except that none of that explains the 'feeling of it': there is nothing in that third-person material activity that correlates with first-person feeling. The two things are not interchangeable; reductionist physical processes cannot substitute for the feeling you and I have as we experience.
This hard problem is difficult to surmount physically. Either you say it's an illusion, but then how can the primary thing we are, what we experience as the self, be an illusion? Or you say that somewhere in fields, atoms, molecules, cells, in 'stuff', is the redness of red or the taste of chocolate.
whenever I see the word 'reductionist', I wonder why it's being used to disparage.
a materialist isn't saying that only material exists: no materialist denies that interesting stuff (behaviors, properties) emerges from material. in fact, "material" is a bit dated, since "stuff-type material" is an emergent property of quantum fields.
why is experience not just the behavior of a neural computer which has certain capabilities (such as remembering its history/identity, some amount of introspection, and of course embodiment and perception)? non-computer-programming philosophers may think there's something hard there, but the only way they can express it boils down to "I think my experience is special".
Because consciousness itself cannot be explained except through experience, i.e. through consciousness (first-person experience) itself, not through material phenomena.
It’s like explaining music vs hearing music
We can explain music intellectually and physically and mathematically
But hearing it in our awareness is a categorically different activity, an experience that has no direct correspondence to its physical correlates.
Up to a point I agree, but when someone deploys this vague language in what are presented as strong arguments for big claims, it is they who bear the burden of disambiguating, clarifying and justifying the terms they use.
I don't agree that the inherent nebulousness of the subject extends cover to the likes of Goff, Chalmers (on panpsychism), or Searle and Nagel (on the hard problem). It's a both-can-be-true situation, and many practicing philosophers appreciate the nebulousness of the topic while strongly disagreeing with the collective attitudes embodied by those names.
If he were capable of describing subjective experience in words with the exactitude you're asking for, then his central argument would be false. The point is that objective measures, like writing, are external, and cannot describe internal subjective experience. It's one thing to probe the atoms; it's another thing to be the atoms themselves.
Basically his answer to the question "What is it like to be a bat?" is that it's impossible to know.
>This is the conclusion I come to whenever I try to grasp the works of Nagel, Chalmers, Goff, Searle et al. They're just linguistically chasing their own tails.
I do mostly agree with that, and I think they collectively give analytic philosophy a bad name. The worst I can say for Nagel in this particular case, though, is that the entire argument amounts to, at best, an evocative variation on a familiar idea, presented as though it's a revelatory introduction of a novel concept. But I don't think he's hiding an untruth behind equivocations, at least not in this case.
But more generally, I would say I couldn't agree more when it comes to the names you listed. Analytic philosophy ended up being almost completely irrelevant to the necessary conceptual breakthroughs that brought us LLMs, a critical missed opportunity for philosophy to be the field that germinates new branches of science, and a sign that a non-trivial portion of its leading lights are just dithering.
Don't agree with this kind of linguistic dismissal. It doesn't change the fact that we have sensations of color, sound, etc. and there are animals that can see colors, hear sounds and detect phenomena we don't. It's also quite possible they experience the same frequencies we see or hear differently, due to their biological differences. This was noted by ancient skeptics when discussing the relativity of perception.
That is what is being discussed using the "what it's like" language.
I like the more specific versions of those terms: the feeling of a toothache and the taste of mint. There's no need to grasp anything, they're feelings. There's no feeling when a metal bar is bent by a press.
As a player, I like to think of sharpness as a measure of the potential consequences of a miscalculation. In a main-line Dragon, the consequence is often getting checkmated in the near future, so it's maximally sharp. In a quiet positional struggle, the consequence might be something as minor as the opponent getting a strong knight, or you ending up with a weak pawn.
Whereas complexity is a measure of how far ahead I can reasonably expect to calculate. This is something non-players often misunderstand, which is why they like to ask me how many moves ahead I can see. It depends on the position.
And I agree, these concepts are orthogonal. Positions can be sharp, complex, both, or neither. A pawn endgame is typically very sharp; the slightest mistake can lead to the opponent queening and checkmating. But it's relatively low in complexity, because you can calculate far ahead using ideas like counting and geometric patterns (square of the pawn, zone of the pawn, distant opposition, etc.) to abstract over long lines of play. On the opposite side, something like a main-line closed Ruy Lopez is very complex (every piece still on the board), but not especially sharp (closed position, both kings are safe, it's more of a struggle for slight positional edges).
Something like a King's Indian or Benoni will be both sharp and complex, whereas an equal rook endgame is neither (it's quite hard to lose a rook endgame; there always seems to be a way to save a draw).
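The "square of the pawn" counting rule mentioned above is concrete enough to write down as a few lines of code. A toy sketch of my own (White passed pawn racing to promote; it ignores the pawn's double first step and any help from the attacking king): the defending king catches the pawn exactly when its Chebyshev distance to the promotion square is no more than the pawn's remaining moves.

```python
def can_catch(pawn_file, pawn_rank, king_file, king_rank, defender_to_move=True):
    """'Square of the pawn': can the defending king catch a White passed pawn?

    0-based coordinates (a1 = file 0, rank 0); the pawn promotes on rank 7.
    Toy model: ignores the pawn's double first step and the attacking king.
    """
    pawn_moves = 7 - pawn_rank                    # tempi until promotion
    king_moves = max(abs(king_file - pawn_file),  # Chebyshev distance to
                     abs(7 - king_rank))          # the promotion square
    if not defender_to_move:
        king_moves += 1                           # the pawn runs first
    return king_moves <= pawn_moves

# Classic textbook case: White pawn on a4, Black king on e4.
print(can_catch(0, 3, 4, 3, defender_to_move=True))   # True: king is in the square
print(can_catch(0, 3, 4, 3, defender_to_move=False))  # False: one tempo short
```

Being "in the square" (files a–e, ranks 4–8 for an a4-pawn) and the Chebyshev-distance check are the same condition, which is exactly why the visual rule lets you skip calculating the race move by move.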
To add to this, Java and other GC languages in some sense have manual memory management too, no matter how much we like to pretend otherwise.
It's easy to fall into a trap where your Banana class becomes a GorillaHoldingTheBananaAndTheEntireJungle class (to borrow a phrase from Joe Armstrong), and nothing ever gets freed because everything is always referenced by something else.
Not to mention the dark arts of avoiding long GC pauses etc.
It's possible to do this in rust too, I suppose. The clearest difference is that in rust these things are explicit rather than implicit. To do this in rust you'd have to use 'static, etc. The other distinction is compile-time versus runtime, of course.
> The clearest difference is that in rust these things are explicit rather than implicit. To do this in rust you'd have to use 'static, etc.
You could use 'static, or you can move (partial) ownership of an object into itself with Rc/Arc and locking, causing the underlying counter to never return to 0. It's still very possible to accidentally hold on to a forest.
> It's easy to fall into a trap where your Banana class becomes a GorillaHoldingTheBananaAndTheEntireJungle class (to borrow a phrase from Joe Armstrong), and nothing ever gets freed because everything is always referenced by something else.
Can you elaborate on this? I'm struggling to picture a situation in which I have a gorilla I'm currently using, but keeping the banana it's holding and the jungle it's in alive is a bad thing.
The joke is you're using the banana but you didn't actually want the gorilla, much less the whole jungle. E.g. you might have an object that represents the single database row you're doing something with, but it's keeping alive a big result set and a connection handle and a transaction. The same thing happening with just an in-memory datastructure (e.g. you computed some big tree structure to compute the result you need) is less bad, but it can still impact your memory usage quite a lot.
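Here's a runnable toy of that accidental-retention shape (Python, though the same thing happens in Java; all the class and function names are made up for illustration). The small "row" object the caller wanted keeps the entire result set reachable through a back-reference, so the GC can never free the big buffer while the row is alive:

```python
class ResultSet:
    """Stand-in for the gorilla/jungle: a big buffer plus bookkeeping."""
    def __init__(self):
        self.buffer = bytearray(16 * 1024 * 1024)  # the "jungle": 16 MB
        self.rows = [Row(self, i) for i in range(3)]

class Row:
    """The banana: the one small value the caller actually wanted."""
    def __init__(self, parent, value):
        self.parent = parent   # back-reference keeps the whole ResultSet alive
        self.value = value

def fetch_first_row():
    # The ResultSet created here looks like a local temporary...
    return ResultSet().rows[0]

row = fetch_first_row()
# ...but row.parent still pins it, 16 MB buffer and all, for as long as
# `row` lives, even though nobody will ever touch the buffer again.
print(len(row.parent.buffer))  # the whole buffer is still resident
```

The usual fixes are to copy out just what you need (`row.value`) before returning, or to make the back-pointer a `weakref.ref` so the jungle can be collected once nothing else holds it.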