
Regardless of what you can't tell, he's absolutely right regarding Apple's claims: saying that an 8GB Mac is as good as a 16GB non-Mac is laughable.


My entry-level 8GB M1 Macbook Air beats my 64GB 10-core Intel iMac in my day-to-day dev work.


That was never said. They said an 8GB Mac is similar to a 16GB non-Mac


> General OS slowness and instability and clunky design not so much.

I can't tell whether you're talking about Windows or Mac here


My WindowServer is sitting at 30% CPU and 15% GPU on an M1 MacBook Pro and I have no idea why. I reboot the Mac more often than I rebooted my dogshit HP enterprise laptop.

MacOS is just a bad OS. It's good if you only ever use the MacBook as a laptop, but the moment you hook it up to a useful setup, it becomes a drag.


Did you ever consider there might be something wrong with your machine or some piece of software you’ve installed?

Or do you think the rest of the world that has heaped praise on the Apple Silicon Macs has missed something you’ve seen?


It's a Mac. Of course there's something wrong with it. It's riddled with bugs Apple can't be bothered to fix.

If it isn't WindowServer losing its mind, it's some other buggy system daemon. Apple simply does not do stable versions of macOS any more.


Most of the Apple Silicon praise is coming from delusional fanboys. It's only good for one thing and it is power efficiency.

For the majority of people that are really into computers, that means jack shit and it is basically just some nice thing you may want to have if you really have to use a laptop from time to time.

Most Apple laptops (especially the expensive ones) are used by executives and somewhat rich people who want to show off. I was an Apple technician in the past so I know very well how marketing really doesn't match the typical user, then you get irrelevant "praise".

As someone who actually cared about Apple products, I wish they didn't switch their whole computer lineup to Apple Silicon. They could have kept making standard computers at the same time, but it would have been hard justifying such outrageous markup on ram and storage.


I blame this on trying to use non-Apple hardware with the MacBook. I'm sure if I had an Apple Display instead of a g-sync 165Hz-capable monitor and closed the lid when hooked up everything would be... perhaps not fine, but good enough.

The MacBook is amazing when considered in isolation. Trying to use it in a workstation setting is definitely 'holding it wrong', which drives me nuts. Wasted potential, and definitely undeserving of the 'Pro' moniker.


I have an LG 38WN95C (3840x1600, 144Hz, Thunderbolt 3) monitor that's plugged into my MacBook Pro (M1 Pro _I think_, I'm not home to double-check). The laptop is in clamshell mode and plugged in via Thunderbolt 3. IIRC, I had to turn the refresh rate down to 120Hz, but otherwise it works well. The only hiccup is that if I unplug it, use it for a bit, and plug it back in, I have to open the lid once after plugging it in so it can realize it's plugged into a display.

I have an M3 Pro MacBook Pro for work that I also swap onto the display, using the single cable that I remove from one laptop to plug into the other. The laptop travels with me to work, where it plugs into a 16:9 4K 60Hz monitor while it's open. I have the same clamshell issue as with my personal laptop, where I have to open it once after plugging it in, but otherwise it works pretty well.

I do however have to do some workspace rearranging when I go from clamshell->ultrawide to my work setup of open lid->16:9 monitor. But that's not a huge deal since I'm going from 1 screen to 2 screens.


None of the three or four displays in my office are Apple displays. They all work amazingly well -- often better than on Windows or Linux.


I use it in a workstation setting. Two workstation settings in fact.

I dock at home into a dual monitor setup.

I go to work and dock into a dual monitor setup.

I keep my MacBook open, not closed.

Pretty much all my coworkers do the same. All with Apple silicon MacBook pros. No idea what you’re talking about.

I believe you — I just think you need to do a deeper dive into your kit instead of assuming the rest of the world must be wrong about these machines.


Huge plus one. I too bounce between environments with my MacBook. Never experienced any of the issues the gp describes, which to me sounds like something wrong with that device. Is it a Corp managed device?


I'm using one in a workstation setting with 1 big curved widescreen monitor and a multitude of keyboards and pointer devices, and it's great. It's fine in the workstation setting; you're holding it particularly wrong somehow


Are you using an actual USB-C to DisplayPort adapter or a software-enabled adapter? If you're driving more than one monitor, it might be the software-enabled type, which is NOT actually multiple monitors over DisplayPort but some other weirdness.


Are you running an external display and is it DisplayLink?


Yes, HDMI-USB-C dedicated cable directly into the laptop.


Windows isn't bad if you have control over it. But in a corp environment you don't, and with forced updates, AV, FW, extra AV scans and other monitoring shit, even a PC with 32 GB of RAM, a fast SSD and the latest i7 starts to act like it's running Win98. That's why I switched to a Mac M2: zero corporate shit and total control, so it works great.


> That's why I switch to Mac M2, zero corporate shit and total control so it works great.

Unless you work at an org that has all this stuff, like pretty much any org at scale that has compliance or security requirements. My work-issued MacBook has an AV that slows opening everything, forces updates and restarts and all that jazz.


Same. Our corporate-issue MBPs come with managed policies, AV software, and remote control services (remote wipe, lockout). It's why I'm never tempted to use work equipment for personal use.

I guess our org has chosen better management software, though, because it never results in restarts. The one exception is the requirement to keep the OS patched in order to log in to corporate systems.

The window position thing is really annoying though.


About FW and AV scans, doesn't macOS phone home when you execute programs?

I remember some problem with programs hanging and/or slow to start because the phoning home functionality was having issues.

edit: yep, it does https://apple.stackexchange.com/questions/391379/does-macos-...


It does phone home yes, but people pretend it doesn't and only Windows is evil when it does so. At least it doesn't have ads.


Not yet. But rest assured, they are just getting started. I know of one theory that suggests ATT was really just Apple preparing to launch its own Ads network.

https://www.wired.com/story/apple-is-an-ad-company-now/


This is where the 'not corporate-endorsed but tolerated' WSL2 is awesome: everything inside the VM is not scanned or slowed down by all the security crap.


To be fair, that's only in part because Apple doesn't seem to have the level of device management Windows offers.

Many of the solutions I've seen in the corp environment try to make up for this with some frankly janky solutions. Apple seems to be slowly improving that, but give it time.

But at the end of the day, the crappy performance hits I've seen on both kinds of devices are generally shitty monitoring solutions and restrictions that neither supports natively.


If you even know about what WireGuard is then you're already in a minority. I personally would rather think as little as possible about networking, so I've been eye-ing tailscale.


Not quite banking portals, but I've had banking apps of a South American country that were only available to Play Store accounts of that country. Really annoying, and even with a VPN/proxy/Tailscale it's not easily fixable, as you can only change the country of a Play Store account once every 12 months.


Yes, many (smaller) banks only allow play/app store access to the countries they operate in (really annoying for expats!).

But that won't affect you if you're just travelling. Even if you get a new phone, your account is still set to your original country.


Isn't that something you get from the infrastructure surrounding the LLM? I thought the "running code" feature didn't need specific support from the LLM, besides being able to output conforming JSON or code when asked to.


The LLM (Claude) currently doesn't know to avoid hallucinating numbers by writing code and running it instead (ChatGPT used to have the same problem, but they fixed it)


That's because the Claude web UI doesn't yet have the equivalent of the ChatGPT Code Interpreter tool (though they say they're working on it). That's not about the quality of the Claude 3 Opus model, which is the model which people think compares to or beats GPT-4. It's about the tooling that has been built for ChatGPT.


Code interpreter is pretty neat, because you can tell ChatGPT to write some code and to make sure the code works, and then it'll write you some bad code, realize it's bad, and then iterate on it until it gets to a place that it's happy with. (Maybe I should say passes its test rather than anthropomorphize ChatGPT as being "happy".)
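The loop being described amounts to something like the following sketch, with the model stubbed out by canned attempts (a real Code Interpreter session runs this server-side and feeds the traceback back into the model to produce the next revision):

```python
import subprocess
import sys
import tempfile

def run_snippet(code: str):
    """Run candidate code in a subprocess and report (ok, combined output)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run([sys.executable, path],
                          capture_output=True, text=True, timeout=10)
    return proc.returncode == 0, proc.stdout + proc.stderr

# Stand-in for the model: the first attempt has a bug, the "revision" fixes it.
ATTEMPTS = [
    "print(total)",                            # NameError: total was never defined
    "total = sum(range(10))\nprint(total)",    # fixed version
]

def iterate_until_working(attempts, max_tries=5):
    """Keep trying until a run succeeds; a real loop would feed each
    failure's output back to the model to generate the next attempt."""
    for code in attempts[:max_tries]:
        ok, output = run_snippet(code)
        if ok:
            return output
    return None

print(iterate_until_working(ATTEMPTS), end="")
```

The "iterate until it's happy" behavior falls out of the retry loop plus error feedback, not anything special in the model itself.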


Right, like the other commenter suggested, that's an infrastructure-level thing, not a model-level thing. Given that you're talking about ChatGPT, I assume you aren't accessing GPT-3.5 or GPT-4 directly through the API but using the app or the interface provided at chat.openai.com. The magic that makes the kinds of interactions you're describing possible amounts to a bit of clever prompting sprinkled on top of some rather impressive frontend design and engineering.

Correctly prompted, even Mistral-7B can write and run code in response to questions, and it's a model that can run on laptops from half a decade ago, with two or three orders of magnitude fewer parameters than GPT-4.


> Right, like the other commenter suggested, that's an infrastructure-level thing, not a model-level thing.

By default, the ChatGPT "model" knows not to try to do math itself and instead writes code to do the math, then runs it. I get that the infrastructure is set up to be able to run it, but why doesn't Claude's main chat UI instead respond with

"hey, do this calculation on your own since I can't" or something of this nature instead of responding to math incorrectly


Because ChatGPT ships with a system prompt which instructs the underlying model to do exactly that. A similar web application could be developed for Claude, and it would perform similarly with the right prompt, as it's quite good at tool use.

For example, I'm able to get Claude-3-Opus to write Python and call a Python interpreter of its own accord when questioned about time series data in some of my data analysis workflows, though I haven't glued together a pretty GUI for it yet (e.g., plots are simply saved to disk). While I haven't run into any problems around calculations yet, I'm sure it wouldn't be too hard to further refine the system prompt and ensure that all calculations are performed or checked using Python.
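For a sense of how thin that glue can be, here's a minimal sketch of such a tool-use loop with the model stubbed out by canned replies. The tool name, message shapes, and hard-coded question are illustrative assumptions, not Anthropic's actual API:

```python
import contextlib
import io

# Hypothetical tool definition, roughly the shape a tool-calling API expects.
RUN_PYTHON_TOOL = {
    "name": "run_python",
    "description": "Execute Python code and return anything printed to stdout.",
    "input_schema": {
        "type": "object",
        "properties": {"code": {"type": "string"}},
        "required": ["code"],
    },
}

def run_python(code: str) -> str:
    """Execute model-emitted code and capture stdout (a real system would sandbox this)."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})
    return buf.getvalue()

def fake_model(messages):
    """Stand-in for the LLM: answers a math question by emitting code."""
    if messages[-1]["role"] == "user":
        return {"type": "tool_use", "name": "run_python",
                "input": {"code": "print(123456789 * 987654321)"}}
    # After seeing the tool result, answer in plain text.
    return {"type": "text", "text": f"The answer is {messages[-1]['content'].strip()}."}

def chat(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    reply = fake_model(messages)
    while reply["type"] == "tool_use":   # the app, not the model, runs the code
        result = run_python(reply["input"]["code"])
        messages.append({"role": "tool", "content": result})
        reply = fake_model(messages)
    return reply["text"]

print(chat("What is 123456789 * 987654321?"))
```

The real work is routing: when the model emits a tool call, the application executes the code and feeds the output back as another message; the system prompt is what makes the model reach for the tool in the first place.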


Doesn't seem like you are very informed on how LLMs work, but just so you know, there are many different versions of Claude, just like how ChatGPT can use different versions of GPT.


Developing countries like France, where the word bidet comes from? (Although, to be fair, my generation didn't use them, until recently when Japanese-style bidets started catching on.)

The other two countries I know of where bidets are common are ultra-modern Korea and Japan.


It's illegal in Italy for homes to not have a bidet.


The amount of people that will run their own models is negligible at the scale of the society. You also can't choose to cut yourself off from the bad outcomes, no matter how much you try. Propaganda 3.0 might not affect you directly, but the political outcomes it produces will.


> The amount of people that will run their own models is negligible at the scale of the society

Let me transpose your sentence back to 1980:

"The amount of people that will run their own computers is negligible at the scale of the society"

(I was 8 in 1980. People literally said this.)

All someone needs to do is to productize a whole-house LLM that does voice to text, runs the LLM, does text back to voice (possibly mimicking whatever voice you want), and you'd have an Alexa replacement that is far smarter. I'd buy it in a heartbeat just so that I wouldn't have to do maintenance on it.


But people don't run their own computers anymore. A phone on its own is nothing without email, maps, messengers, apps, a browser, cloud storage, etc. So yes, nobody runs their own computer anymore. It'll be the same for LLMs. Can they run on commodity hardware? Yes. Will people go to lengths (including buying better hardware) to run GPT-lite instead of the much better internet-aware cloud LLM GPT-8?


It will become mainstream in a few years when every laptop, cellphone, web browser or operating system will sport local LLMs. For now it is a bit hard, but only a bit.


I get why assertions would be more important than in memory-safe languages, but why are unit tests less important in C than in higher-level languages? Or do you just mean relative to assertions? (Which to me don't really influence each other in that way.)


Because the coding style is different. The basic unit of code is the procedure, not abstract entities like DateTimeCalculator or EmployeePayrollManager. Procedures are sensitive to the context where they are called. A procedure calling another procedure usually takes care to provide valid input data. Robust error-checking must be in place, or your program is unsafe.

A unit test wouldn't know the calling context of procedures deep in the call graph, and cannot detect unsafe but seemingly working code.


This is more a product of the culture surrounding the language than anything else. Modern C avoids global state, prefers explicitly passed context, and even allocation is external to the function's logic.

And testing functions is easy.

This is as opposed to procedures, i.e. functions that operate on global state with fragile context assumptions, which are almost impossible to replicate in test environments.


> The basic unit of code is the procedure, not abstract entities like DateTimeCalculator or EmployeePayrollManager.

I agree, but in almost any non-trivial well-run C project the basic unit is going to be datatypes, opaque pointers, modules, etc.

In that case I generally throw in a `bool MyObjectType_test(void)` into the module so, in some sense, I'll have some basic sort of testing. Not as nice as the OO languages provide, but enough to give myself some confidence that changes don't introduce blatant bugs.


I work mostly with functional programming languages, where the function is the basic unit of code; what you say is true, but it's also true for "abstract entities". Not all classes need robust error checking if you control how they are being called.

> The basic unit of code is the procedure

I think ultimately this is where we think differently. If you are okay testing a whole class in an OOP context and calling that a unit test, then a test that calls multiple procedures (because one of them creates the input for the main procedure I want to test, for example) can also be considered a unit test.


I also think that because it's lower level, the functions everything is built on tend to be stable long term. A function was tested at one point and then used in production for a couple of years. It works and will continue to work as long as no one messes with it.


With the new React 19 that changes everything once again™, I wouldn't be surprised if by React 20 or 21 they completely move away from the vdom.

Getting a compiler step seems like a first step towards that.


Software, websites

