In my mid-20s, I ran the SF Hardware Meetup, and Lee came and just told me something like, "Oh yea, I've been into hardware for a long time." Only later did I realize who he was, haha.
Like others here, I was concerned seeing his name trending here, and I'm so glad he's still alive.
Lee represents the best mentality of the tech scene, and I hope we can get back to a more pro-social place and away from this profit-first bubble shit.
Arguably, with LLMs, (1) it's already far easier to switch between models than it is today to switch between AWS / GCP / Azure, and (2) they will rapidly decrease the switching costs of porting your legacy systems to new ones - i.e., the whole business model of Oracle, etc.
Meanwhile, the whole world is building more chip fabs, data centers, AI software/hardware architectures, etc.
Feels more like we're headed toward commoditization of the compute layer than toward a few giant AI monopolies.
And if true, that's actually even more exciting for our industry and "letting 100 flowers bloom".
Feels like, while there are people (like me) who are childfree by choice, nearly everyone I meet wants kids (RIP my dating life...), and it's more economics / pessimism that prevents people from having kids.
Perhaps because, regardless of the policies some countries have put in place targeting the economic/material costs of having kids (Russia, Hungary, S. Korea, etc.), those policies don't change the brutal cultural situations in those countries: two authoritarian regimes, plus the extreme competitiveness of S. Korea.
It seems like the neoliberal-era growth model has long since broken down, and our politics hasn't adapted with the deeper reforms needed to take advantage of the latest technologies, which could produce far more material abundance than we have today (see: China's rise).
As a left accelerationist, I can't help but think that a society with a 4 day week, no precarity, cheap access to good housing, healthcare, education, etc. would increase the social trust and the birth rate, as most people want kids and want them to grow up in a thriving society.
People choosing to be “childfree” are not the only (or maybe even the primary) factor in total fertility rates being below replacement. The increase in one-child families also contributes heavily (technically, the decrease in 3+ child families contributes, since replacement TFR is 2.1).
Yea, I was hopeful seeing that, but was also pretty skeptical that they really had done it, as that demo was easy to fake, and I saw no 3rd party verification.
I tried to do a little digging recently, and didn’t find much outside of 2000-2015. I agree with you though, and would jump at the opportunity to work on that project.
Yea totally - like those Black Mirror-esque (and South Park?) videos of people having AI talk to their partner about deep relationship stuff.
We just built a mechanical parts AI search engine [1], and a lot of what it does is just get the best options clustered together, and then give the user the power to do the deeper engineering work of part selection in a UI that makes more sense for the task than a chat UI.
Feels like this pattern of "narrow to just the good options, but give the user agency / affordances" is far better.
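To make that pattern concrete, here's a rough sketch of the "narrow, then hand off" idea (not our actual implementation; the spec fields and the KMeans choice are just placeholders): cluster candidates on a few spec dimensions and hand back a handful of comparable groups for the user to work through in a structured UI, rather than one flat chat answer.

    from dataclasses import dataclass

    import numpy as np
    from sklearn.cluster import KMeans

    @dataclass
    class Part:
        name: str
        specs: np.ndarray  # e.g. normalized [load_rating, mass, price] - hypothetical fields

    def shortlist(parts: list[Part], n_groups: int = 4) -> dict[int, list[Part]]:
        # Cluster candidates on their spec vectors so the UI can present a few
        # comparable groups, leaving the final engineering judgment to the user.
        X = np.stack([p.specs for p in parts])
        labels = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(X)
        groups: dict[int, list[Part]] = {}
        for part, label in zip(parts, labels):
            groups.setdefault(int(label), []).append(part)
        return groups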
> people having AI talk to their partner about deep relationship stuff.
I have read stories about people using AI to write their Tinder messages, eulogies, etc.
I'm optimistic that it will be easier to find/fix vulnerabilities via auto pen-testing / patching and other security measures than it will be for attackers to find/exploit them - i.e., defense is easier in an auto-security world.
Does anyone disagree?
This is purely my intuition, but I'm interested in how others are thinking about it.
All this comes with the mega caveat that it assumes very widespread adoption of these defenses, which we know won't happen, so auto-hacking may be rampant for a while.
If you can compromise an employee desktop and put a too-cheap-to-meter intelligence equivalent to a medium-skilled software developer in there to handcraft an attack on whatever internal applications they have access to, it's kind of over. This kind of stuff isn’t normally hardened against custom or creative attacks. Cybersecurity rests on bot attacks having known signatures, and sophisticated human attackers having better things to do with their time.
I've also thought this for scam perpetration vs mitigation. An AI listening to grandma's call would surely detect most confidence or pig butchering scams (or suggest how to verify), and be able to cast doubt on the caller's intentions or inform a trusted relative before the scammer can build up rapport. Security and surveillance concerns notwithstanding.
In general, most modern vulnerabilities are initially identified by fuzzing systems under abnormal conditions. Whether these issues can be consistently exploited is often probabilistic, so even reproducing them with a PoC dataset is already difficult.
That being said, most modern exploits are already auto-generated through brute force, as nothing more complex is required.
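For anyone who hasn't seen it in practice, a toy version of that loop looks something like this (pure sketch; parse_record is a made-up stand-in for the code under test, and real fuzzers like AFL or libFuzzer add coverage feedback on top of the blind mutation shown here):

    import random

    def mutate(data: bytes, flips: int = 4) -> bytes:
        # Flip a few random bytes of a known-good input.
        buf = bytearray(data)
        for _ in range(flips):
            i = random.randrange(len(buf))
            buf[i] ^= random.randrange(1, 256)
        return bytes(buf)

    def fuzz(parse_record, seed: bytes, iterations: int = 100_000) -> list[bytes]:
        # Hammer the target with mutated inputs and keep anything that crashes it.
        crashers = []
        for _ in range(iterations):
            candidate = mutate(seed)
            try:
                parse_record(candidate)
            except Exception:
                crashers.append(candidate)  # reliably reproducing these is the hard part
        return crashers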
> Does anyone disagree?
CVE agents already pose a serious threat vector in and of themselves.
1. Models can't currently be made inherently trustworthy, and the people claiming otherwise are selling something.
"Sleeper Agents in Large Language Models - Computerphile"
2. LLMs can negatively impact logical function in human users. However, people feel 20% more productive, and that makes their contributed work dangerous.
3. People are already bad at reconciling their instincts and rational evaluation. Adding additional logical impairments is not wise.
4. Auto-merging vulnerabilities into open source is already a concern, as it falls into the ambiguous "malicious sabotage" or "incompetent noob" classifications. How do we know someone's or some model's intent? We can't, and thus the codebase could turn into an incoherent mess for human readers.
Mitigating risk:
i. Offline agents should only have read-access to advise on identified problem patterns.
ii. Code should never be cut-and-pasted, but rather evaluated for its meaning.
iii. Assume a system is already compromised, and consider how to handle the situation. In this line of reasoning, the policy choices should become clear.
> I'm optimistic that it's easier to find/solve vulnerabilities via auto pen-testing / patching, and other security measures, than it will be to find/exploit vulnerabilities after - ie defense is easier in an auto-security world.
I somewhat share the feeling that this is where it's going, but I'm not sure fixing will be easier. In "meatbag" red vs. blue teams, red has it easier: they only have to be right once, while blue has to be right every time.
I do imagine something adversarial being the new standard, though. We'll have red vs blue agents that constantly work on owning the other side.
In open source codebases, perhaps - if big tech is generous enough to run these tools and generate PRs (if those are welcome) for the issues they find.
In proprietary/closed source, it depends on the willingness to spend the money these tools will end up costing.
As there are more and more vibe-coded apps, there will be more security bugs, because app owners just don't know better or don't care to fix them.
This happened before, when the rise of WordPress and other CMSes (and their plugin ecosystems), or languages like early PHP - or for that matter even C - opened up software development to wider communities.
In many small companies (e.g. startups), the attackers are far more experienced and skilled than are the defenders. For attacking specific targets, they also have the leisure of choosing the timing of the attack - maybe the CTO just boarded a four hour flight?
Though it's possible that generating the STEP first is easier to do, and that the plan is to backport the feature tree using another method / model, which would then enable editing.
Yes, it would seem a post hoc feature tree requires the constraints that come from the context in your head, but I could imagine that for most cases a "drafter's intuition" in the AI may be sufficient, and you could build an interface that lets that context mostly be given up front and then refined through iteration after generation.
I could imagine the stepwise approach may allow AI training to be more constrained / efficient than trying to do the whole thing in one go.
Murex were the shells whose excretions were used to make the Tyrian purple of the Mediterranean. Tyrian referring to Tyre, one of the major Phoenician city-states.
It was so iconic that the "Punic Wars" are called that because Punic = Phoenicia = "Purple People".
Carthage was the Phoenician colony that outlasted the home country.
Also, the Phoenicians were the descendants of the Canaanites, who (according to one etymological theory) are also named after the color purple.
The Phoenicians were a Semitic people like the Jews, and they gave the world its first alphabet, which was adopted by both the Hebrews and the Greeks. The Greeks added vowels, the Romans adopted that alphabet, and it became roughly the one we use today.
If you go to the Wiki page (https://en.m.wikipedia.org/wiki/Phoenician_alphabet) and scroll down to the Table of Letters header, you can see how the letters evolved from Egyptian hieroglyphs to the letters we use today. It’s particularly interesting to me that our letter “B” (which the Greeks called “beta” and which forms the tail end of “alphabet”) was originally a house, and the Semitic languages called it “bēt”, which was their word for house, which you can still see today in Biblical place names like Bethel (house of God—“El” was a very old name for God).
It's interesting how, unlike Sumerian cuneiform or Egyptian hieroglyphs, which were complex systems that came from dedicated court scribes, Phoenicia's alphabet was the kind of pragmatic system you can imagine a more mercantile society developing.
It's wild that it turned into so many scripts: Latin, Greek, Arabic, Cyrillic, Hebrew, and beyond.
Also interesting is Chinese script, which was saved from this fate by Stalin telling Mao that China should keep its unique writing (which Russia, of course, was already doing). Mao did carry out the simplification, but he turned away from his earlier plan to standardize Chinese on the Latin script.
Murex also has significant religious significance to Jews. It is the source of the biblically mandated blue threads for four cornered garments: https://en.wikipedia.org/wiki/Tekhelet