
I think there is a lot that was well done in the Vista UI, but I find the gradients on the buttons and the taskbar to be too harsh.

I’m pretty sure they’re expressing confusion at the chart stating that salmon are lighter than rats.


That reminds me of when I was living right by the BLM protests/CHOP [0] in Seattle and got tear gas in my condo. I had just bought some new coffee beans to try out, and when I tried them the next morning, I thought they tasted super "chemical-y" and immediately threw them away.

Turns out tear gas is known to seep into food items, especially porous food like coffee and bread [1]. Not surprised at all that VOCs linger in reservoirs as mentioned in the article.

[0]: https://en.wikipedia.org/wiki/Capitol_Hill_Occupied_Protest

[1]: https://www.propublica.org/article/tear-gas-is-way-more-dang...


> That reminds me of when I was living right by the BLM protests/CHOP [0] in Seattle and got tear gas in my condo. I had just bought some new coffee beans to try out, and when I tried them the next morning, I thought they tasted super "chemical-y" and immediately threw them away.

This is by far the most Seattle thing I have ever read.


My understanding is that this will be removed in iOS 27. Given how Apple has behaved in the past, I wouldn't be surprised if they really did it.


Yeah, that would suck. The designer I'm working with is already projectile-vomiting over LG. I think he'd quit if I insisted that he help me transition to it (we're a volunteer team).


My understanding is that Apple will remove it sometime next year, around April.


I'm hoping that some of the senior management will realize what a clusterf**k this is and let it stay (they still support ObjC apps, and I'd bet that lots of AAA apps can't be easily converted to LG).

The thing we have to keep in mind is that some very "strong-willed" folks have staked their egos on LG and will choose it as their hill to die on. We've seen that happen in many other instances (not just at Apple).


Is it just me, or does it feel like the "empirical argument" is correct (it is an observation, after all), while the "theoretical argument" is wildly off?

My understanding is that the different levels of cache and memory are implemented quite differently to optimize for density versus speed. As in, this scaling is not the result of some natural geometric law, but rather of those chips' designers deliberately targeting the expected workloads. Some chips, like IBM's mainframe CPUs, have huge caches which might not follow the same scaling.
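For what it's worth, the empirical fit is easy to eyeball with ballpark numbers (an illustrative sketch; the sizes and latencies below are rough desktop-class figures I'm assuming, not measurements of any specific chip):

    # If latency ~ k * size^(1/3), k should come out roughly constant
    # across cache levels despite six orders of magnitude in size.
    levels = {                        # capacity (bytes), load latency (ns)
        "L1":   (32 * 2**10,  1.0),
        "L2":   (512 * 2**10, 3.5),
        "L3":   (32 * 2**20,  12.0),
        "DRAM": (32 * 2**30,  80.0),
    }
    for name, (size, latency_ns) in levels.items():
        k = latency_ns / size ** (1 / 3)
        print(f"{name:4} k = {k:.3f}")   # stays within roughly a 2x band

The k values land within about a factor of two of each other across six orders of magnitude of capacity, which is why the empirical observation holds even if it's a design choice rather than a geometric law.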

I'm no performance expert, but this struck me as odd.


The theoretical argument seems sound, but it ignores that current implementations are dominated by massive constant factors beyond the theoretical limit alone (particularly cost and heat), and it skips over explaining why those end up having similar growth rates.

The actual formula used (at the bottom of the ChatGPT screenshot) includes corrections for some of these factors; without them it'd have the right growth rate but yield nonsense.


I guess you're right in a purely geometric sense. It's just that it seems almost silly to consider, given that (AIUI) the 3D geometric constraints don't impact memory access latency at all for now (and likely won't for any reasonable period of time).

Like you said, thermal and cost constraints dwarf the geometrical one. But I guess my point is that they make it a non-issue, and therefore it isn't a sound theoretical explanation for why memory access is O(N^[1/3]).


Should thermal and cost constraints at scale not also relate to the volume of the individual components in the same way (ignoring constant factors) as the growth factors for an idealized memory structure around the CPU? More literally: the size and quantity of transistors (or other alternative units) simultaneously describe the cost, the heat dissipation, and the volume of the memory. Tweaking any of the parameters still ultimately results in a "how much can we handle in that volume of product" equation, which will be the ultimate bound.

The difference is we spread them out into differently optimized volumes instead of building a homogeneous cube, which is (most likely, IMO) where most of the constant factors come from.

I think this is the part the article glossed over to get to the empirical results, but I also don't feel it's an inherently unreasonable set of assumptions. At the very least, it matches what the theoretical limit would be in the far future, even if it only coincidentally matches current systems for other reasons.


Thermal is a huge issue because Dennard Scaling has been dead for a long time. We are kind of limping along with Moore but anything that looks like Dennard is going to involve a change of materials or new chemistry.

I get the impression that backside power was the last big Dennard-adjacent win, and that’s more about relocating some of the heat to a spot closer to the chip surface, which gives thermal conductivity a boost since the heat has to move a shorter distance to get out. I think that leaves gallium, optical communication to create less heat in the bulk of the chip, and maybe new/cheaper silicon-on-insulator improvements for less parasitic loss. What other things are still in the bag of tricks? Because smaller gates aren’t one of them.


The speed of light is roughly 30cm/ns. So accessing main memory 15cm away on a motherboard is about 0.5ns slower than cache, no matter whether the main memory is DRAM or SRAM. That's 2 clock cycles extra at 4GHz, which is a tiny fraction of the actual time (somewhere around 100ns) but not negligible. 0.5% or so is just enough that I'd say it can matter. Particularly since larger computers end up putting some of their RAM further away.
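Spelled out (same numbers as above; note that a round trip would double the one-way figure, and real signals in copper travel somewhat slower than c):

    # Speed-of-light bound on memory access, back of the envelope.
    C_CM_PER_NS = 30.0     # light travels ~30 cm per nanosecond in vacuum
    DISTANCE_CM = 15.0     # CPU to DIMM on a typical motherboard
    CLOCK_GHZ = 4.0

    one_way_ns = DISTANCE_CM / C_CM_PER_NS      # 0.5 ns
    extra_cycles = one_way_ns * CLOCK_GHZ       # 2 cycles at 4 GHz
    fraction = one_way_ns / 100.0               # vs ~100 ns total DRAM latency
    print(one_way_ns, extra_cycles, fraction)   # 0.5 2.0 0.005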


> geometric constraints don't impact the memory access latency at all for now

I don't know; every time Intel/AMD increase cache size it also takes more cycles. That sounds like a speed of light limit.


How you gonna pack bits onto a physical chip except to put them into a cube? What’s the longest path in a cube? What’s the average path length in a cube? They’re all proportional to the cube’s edge length, i.e., the cube root of its volume.
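To make that concrete (a minimal sketch; the exact average-distance constant for a unit cube is about 0.6617, estimated here by Monte Carlo):

    # Longest and average straight-line paths in a unit cube.
    # Both scale linearly with the side length, i.e. as N^(1/3)
    # for N bits packed at a fixed density.
    import math
    import random

    longest = math.sqrt(3)            # the space diagonal
    samples = 100_000
    total = 0.0
    for _ in range(samples):
        a = [random.random() for _ in range(3)]
        b = [random.random() for _ in range(3)]
        total += math.dist(a, b)      # Euclidean distance (Python 3.8+)
    print(longest, total / samples)   # ~1.732, ~0.662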


Yes, but that's not at all how we're packing bits into physical chips right now. The fact that access latency is O(N^[1/3]) has nothing to do with this; it's just that the relative sizes of caches and memories have been designed that way.

That this latency is of the same order as if you were putting bits in a cube is more coincidental than anything.


A bound can exist due to multiple factors. Even if you fixed the other bounds, the speed of light would still dictate a growth rate of at least the cube root of the volume.


Hypercube, e.g. the Connection Machine: https://www.tamikothiel.com/theory/cm_txts/index.html


That is artistic license, and you know it. Do you have access to a tesseract? If so why the hell are you on HN?

Edit to add: And on further reflection, this won’t save you. The heat production in the hypercube is a function of internal volume, but the heat dissipation is a function of the interface with 3D space, so each hypercube is limited in size and must then be separated in time or space to allow that heat to be transferred away, which takes up more than O(n^(1/3)) distance, and distance means speed-of-light delays.
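As a sketch of that size limit (the power density and heat flux below are assumed round numbers for illustration, not data for any real system):

    # Heat generation grows with volume, dissipation with surface area,
    # so the feasible side length of a solid compute cube is bounded.
    POWER_W_PER_CM3 = 1000.0   # assumed: CPU-like logic, ~100 W per 0.1 cm^3
    FLUX_W_PER_CM2 = 100.0     # assumed: aggressive liquid-cooling limit

    # For a cube of side s: generated = p * s^3, removable = f * 6 * s^2.
    # Feasibility p * s^3 <= 6 * f * s^2 gives s <= 6 * f / p.
    max_side_cm = 6 * FLUX_W_PER_CM2 / POWER_W_PER_CM3
    print(max_side_cm)         # 0.6 cm: bigger cubes must be split and spaced out

which is why dense logic ends up spread across many separated packages rather than fused into one block.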


Sounds like you've got it all figured out, all right.


There is no beating the speed of light. You cannot stack servers like cordwood. So no matter what toroidal internetworking architecture you build, the maximum interconnect length is always, always a function of the dimensions of the devices, typically with a constant factor of 5x to 10x on top.

Star interconnects are limited by the surface area of each device, which grows only as the two-thirds power of the volume, because you have to have space to plug the wires in.

“You’ve got it all figured out” is deflecting basic physics facts. What is your clever solution to data center physics?


I think a lot of it is a reaction to the hype before the launch of GPT-5. People were sold on, and were expecting, a noticeably big step (akin to GPT-3.5 to GPT-4), but in reality it's not that much noticeably better for the majority of use cases.

Don't get me wrong, I actually quite like GPT-5, but this is how I understand the backlash it has received.


Yeah, that is fair. I admit to being a bit bummed out as well. One might almost say that if o3 was effectively GPT-5 in terms of performance improvement, then we were all really hoping for a GPT-6, and that's not here yet. I am pretty optimistic, based on the information I have, that we will see GPT-6-class models which are correspondingly impressive. Not sure about GPT-7 though.


Honestly, I’m skeptical of that narrative. I think AI skeptics were always going to be shrill about how it was overhyped and how this proves they were right! Seriously, how good would GPT-5 have had to be for Ed NOT to write this exact post?

I’m very happy with GPT-5, especially as a heavy API user. It’s very cost-effective for its capabilities. I’m sure GPT-6 will be even better, and I’m sure Ed and all the other people who hate AI will call it a nothingburger too. So it goes.


Google's biggest problem in my opinion (and I'm saying that as an ex-googler) is that Google doesn't have a product culture. Google had the tech for something like ChatGPT for a long time, but couldn't come up with that product. Instead it had to rely on another company showing it the way and then copy them and try to out-engineer them...

I still think ultimately (and somewhat sadly) Google will win the AI race due to its engineering talent and the sheer amount of data it has (and Android integration potential).


> is that Google doesn't have a product culture.

This is evident in Android and the Pixel lineup, which could be my favorite phone if not for some of the most baffling and frustrating decisions, which lead to a very weirdly disjointed app experience (compared to something like iOS's first-party tools).

Like removing location-based reminders from Google Tasks, for some reason? There's still no Apple Shortcuts-like automation built in. Keep can still do location-based reminders, but it's a notes app, so which am I supposed to use, Google Tasks or Keep? Well, Gemini adds reminders to Google Tasks, not Keep, so that's a problem if I wanted to use Keep primarily.

If they just spent some time polishing and integrating these tools, and added some of their ML magic, they'd blow Apple out of the water.

All of Google's tech is cool and interesting from a tech standpoint, but it's not well integrated into a full consumer experience.


I still can't fathom how one of my favorite Android features simply disappeared years ago: the 'time to leave' notification for calendar appointments with address info.


Google recently let go ALL -- EVERY SINGLE -- L3/L4/L5 UX Researcher

https://www.thevoiceofuser.com/google-clouds-cuts-and-the-bi...

Could it be argued that perhaps UX Research was not working at all? Or that their recommendations were not being incorporated? Or that things will get even worse now without them?


The link says:

> Some teams in the Google Cloud org just laid off all UX researchers below L6

That’s not all UX researchers below L6 in the entire company. It doesn’t even sound like it’s all UX researchers below L6 in Google Cloud.


Maybe Apple should follow suit... I jest, but I'm still processing the Liquid Glass debacle.


At least it's uniform. Unlike Material 3 Expressive, which might look different depending on the app, or not be implemented at all, or be only half implemented even in some of Google's own apps, much like with every other Android redesign.

I get Google can't force it on all the OEMs with their custom skins, but they can at least control their own PixelOS and their own apps.


It’s not uniform at all. Some parts of the interface and of their apps get it, others don’t. Some parts look more glassy, some more frosty. It’s all over the place in terms of consistency. It’s also quite different between Apple’s various OSs, although allegedly the purpose was to unify their look.


And even when it does copy other products, it seems to do a terrible job of it.

Google's AI offering is a complete nightmare to use. Three different APIs, at least two different subscriptions, documentation that uses them interchangeably.

For Gemini's API, it's often much simpler to pay OpenRouter the 5% surcharge to BYOK than to deal with it all.

I still can't use my Google AI Pro account with gemini-cli.


Then there's the billing dashboards...

It's amazing how they can show useless data while completely obfuscating what matters.


Yeah, the whole billing death march is what ended up making me pick OpenAI as my main workhorse instead of GOOG.

Not enough brain cycles to figure out a way to give Google money, whereas the OpenAI subscription was basically a no-brainer.


As of this week, you can use gemini-cli with Google AI Pro.


I had great fun this week with the batch API. A good morning lost trying to work out how to do a not particularly complex batch request via JSONL.

The Python library is not well documented and has some pretty basic issues that need looking at. Terrible, unhelpful errors, and "oh, so this works if I put it in camel case" sort of stuff.


litellm + gemini API key?

I find Gemini is their first API that works like that. Not like their pre-Gemini vision, speech recognition, Sheets, etc. APIs. Those were/are a nightmare to set up indeed.


To be fair, according to OpenAI they started ChatGPT as a demo/experiment and were taken by surprise when it went viral.

It may well be that they also didn't have a product culture as an organization, but were willing to experiment or let small teams do so.

It's still a lesson, but maybe a different one.

With organizational scale it becomes harder and harder to launch experiments under the brand. Red tape increases, outside scrutiny increases. Retaining the ability to do that is difficult.

Google does experiment a fair bit (including in AI; e.g., NotebookLM and its podcast feature are, I think, a standout example of trying to see what sticks), but they also tend to hide their experiments in developer portals nowadays, which makes it difficult to get a signal from a general consumer audience.


If I can take a slight tangent: this is what I will remember OpenAI for. Not the Closed vs. Open debate. They caused the democratization of access to AI models. Prior to ChatGPT, I would hear about these great models DeepMind and Google were developing. They'd always stay closed behind the walls of Google.

OpenAI forced Google to release, and as a result we have all of the AI tooling, integrations, and models. Meta's leaning into the leaked Llama weights took this further and sparked the open-source LLM revolution (in addition to the myriad contributors and researchers who built on that).

If we had left it to Google, I suspect they'd have released tooling (as they did with TensorFlow) but not an LLM that might compete with their core product.


According to Karen Hao's Empire of AI, this is only half accurate. And I trust what Karen Hao says a lot more.

OpenAI mistakenly thought Anthropic was about to launch a chatbot, and ChatGPT was a scrappy, rushed-out-the-door product made from an intermediate version of GPT-4, meant to one-up them. Of course, they were surprised at how popular it became.


Do you mean an intermediate version of GPT-3? That's more the timeline I'm thinking.


Google is definitely good at experimenting (and yeah, NotebookLM is really cool), which is a product of the bottom-up culture. The lack of a consistent story with regard to AI products, however, is a testament to the lack of product vision from the top.


NotebookLM came out of Google Labs though, and in collaboration with outside stakeholders. I'm not sure I would call it a success of "bottom-up" culture, but a well realized idea from a dedicated incubator. That doesn't necessarily mean the rest of the company is so empowered or product oriented.


> With organizational scale it becomes harder and harder to launch experiments under the brand

I feel like Google tried to solve for this with their `withgoogle.com` domain, and it just ends up being confusing or, worse still, frustrating when you see something awesome and then nothing ever comes of it.


I don't think Google was ever going to be the first to productize an LLM. LLMs say stupid shit - especially in the early days - and would've just attracted even more bad press if Google had been the front runner. OpenAI came along as a small, move-fast-and-break-things entity and introduced this tech to the public, and Google (and others) was able to join the fray after that seal was broken.


Good point. If Google had released the first version of Bard or whatnot as the first LLM, it probably would've received some good press, but also a lot of "eh, just another Google toy side project". I could've seen myself saying that.


It would've joined the Google graveyard for sure.


This has plagued Google internally for decades. I’m reminded of Steve Yegge’s Google rant [1] from 14 years ago, and ChatGPT is evidence that they still haven’t fixed it.

It’s amazing how pervasive company cultures can be, how they come from the top, and how they can only be fixed by replacing leadership with an extremely talented CEO who knows the company inside out and can change its course. Nadella at Microsoft comes to mind, although that was more about Microsoft going back to its roots (replacing sales-oriented leadership with product-oriented leadership again).

Google never had product oriented leadership in the same way that Amazon, Apple and Microsoft had.

I don’t think this will ever change at this point.

For those who haven’t read it, Steve Yegge’s rant about Google is worth your time:

[1]: https://gist.github.com/chitchcock/1281611


> Google doesn't have a product culture

Fair criticism that it took someone else to make something of the tech that Google initially invented, but Google is furiously experimenting with all their active products since Sundar's "code red" memo.


Well, they had an internal ethics team that told them that their technology was garbage. That can't help. The other guys' ethics teams are all like "Our stuff is too awesome for people to use. No one should have this kind of unbridled power. We must muzzle the beast before a tourist rides him" and Google's ethics team was like "our shit sucks lol this is just a Markov chain parrot doesn't do shit it's garbage".


Which, to be fair—we're talking about the pre-GPT-3.5 era—it kind of was?


Don't you remember all of the scaremongering around how unethical it would be to release a GPT-3 model publicly?

Google personally reached out to someone trying to reproduce GPT3 and convinced him to abandon his plan of releasing it to the public.


There was scaremongering about releasing GPT-2.

GPT-2!!


You're right. I was remembering GPT-2, and it was OpenAI that reached out. He was in contact with Google to get the training compute.

https://medium.com/@NPCollapse/the-hacker-learns-to-trust-62...


And here we are, after DeepSeek and the Qwen models and so much more, like GLM-4.6, which are reaching SOTA of sorts.


I mean, the level of scams occurring due to LLMs has increased since then, so it's not exactly wrong.


The unfortunate truth when you're on the cusp of a new technology: it isn't good yet. Keeping a team of guys around whose sole job it is to tell you your stuff sucks is probably not aligned with producing good stuff.


There's almost an "uncanny valley" type of situation with good products. New technologies start out promising, but rough. Then, the closer they get to being a "good product", the more the ways it's not there yet stand out. At that stage it can feel worse than a mediocre product, until it's done.


There's a world of difference between saying "our stuff sucks" vs "here are the specific ways our stuff isn't ready for launch". The former is just whining, the latter is what a good PM does.


And we (average users) are really lucky for that. Imagine a world where Google had been pushing AI products in the first place: OpenAI and other competitors would not have stood a chance, and it would have had ads by 2024. They'd have captured hundreds of billions of value by now.

The fact that Attention Is All You Need was freely available online is, in hindsight, unbelievably fortunate.


OpenAI were the ones that came up with RLHF, which is what made ChatGPT viable.

Without RLHF, LLM-based chat was a psychotic liability.


Along with its engineering talent and resource scale, I think their in-house chips are one of their core advantages. They can scale in a way that their peers are going to struggle to match, and at much lower cost. Nvidia's extreme margins are Google's opportunity.


Didn't Google have Bard internally around the same time as ChatGPT?


Bard came out shortly after ChatGPT as a prototype of what would become Gemini-the-chatbot.

There were other, less-available prototypes prior to that.


Meena/LaMDA were around the same time as GPT-2.


Search for Meena from Google.


Most people might remember it from the headlines:

> In June 2022, LaMDA gained widespread attention when Google engineer Blake Lemoine made claims that the chatbot had become sentient. The scientific community has largely rejected Lemoine's claims...

From https://en.wikipedia.org/wiki/LaMDA


Yeah, that was my introduction to LLMs!


https://research.google/blog/towards-a-conversational-agent-...

Damn, that's crazy. Or at least it is in hindsight. I don't remember any big deal being made about it back then.


Why sadly? I’d rather the originators of the technology win.


It's a different skill set, and also partially company culture.

For example, does a CSS expert know how to design a great website? _Maybe_… but knowing the CSS spec in its entirety doesn't (by itself) help you understand how to make useful or delightful products.


GPT-3.5-era ChatGPT was more of a novelty than a product.

It would be weird to release that as a serious company. They tried making a deliberately wacky chatbot, but it was not fun.

Letting OpenAI release it first was the right move.


To me, I wish OpenAI would bring back GPT-3 and GPT-3.5; those were the phenomenal leap in intelligence. I appreciated GPT-3 a lot, more so than even the current models. It had its quirks, but it was such a good model, man.

I remember building a dead-simple SvelteKit website back in the GPT-3 days. It was good, it was mind-blowing, and I was proud of it.

The only interactivity was a button that would change from one color to another and then lead to a PDF.

If I'm honest, the UI was genuinely good. It still gives me more nostalgia and good vibes than current models. Em-dashes weren't that common in GPT-3, IIRC, but I have genuinely forgotten what it was like to talk to it.


> Android integration potential

Nearly all the people that matter use iPhones... yet Apple really hasn't had much success in the AI world, despite being in a position to win if their product were even vaguely passable.


"Flumi, the wayfinder of Gurted, is created in Godot - the game engine." gives strong Microservices by Krazam vibes

https://www.youtube.com/watch?v=y8OnoxKotPQ


Really surprised about the closing of Starbucks Reserve in Seattle. That place was always bursting at the seams and must have had a major halo effect for the brand. It's hard not to associate the closure with its recent unionization.


I absolutely believe that the Reserve location in Seattle was busy and can second that those locations had a halo effect for the larger brand.

I've only been to the Reserve location in Midtown Manhattan once, and it was a very different experience than your run-of-the-mill location. Specifically, I had a drink replaced without asking because the barista said it had "died" while I was in the (nice, clean, great-smelling) restroom. Overall, it was just a nice, pleasant experience, and I definitely would have frequented that location if I was working in that area regularly. I wonder if this shop was a union one? That might explain why everyone was so pleasant and why it seems to have been closed.


No way! I've been there and had the best tasting coffee and pizza of my life. Surprisingly good combo. And it was right next to the old Living Computer Museum. It was for sure the highlight of my trips to Seattle. :(


The museum is also gone for good. Thank Paul Allen's estate for not wanting to keep it going or find an organization to take it over.


Fortunately we have ample local alternatives in Seattle to pick from


Yes and no. Having your own electricity production shields you somewhat from rising energy prices. That added predictability is worth something.

