The 1M token context was Gemini's headlining feature. Now the one thing I'd like Anthropic to work on is how tokens are counted for document processing: Gemini will often bill 1/10th the tokens Anthropic does for the same document.
Agree, but pricing-wise Gemini 2.5 Pro wins. Gemini input tokens are half the cost of Claude 4's, and output is $5/million cheaper. But document processing is where it's dramatically cheaper: a 5MB PDF (customer invoice) is like 5k tokens with Gemini vs. 56k with Claude.
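As a rough back-of-the-envelope check, the two effects compound. The per-token prices below are assumptions from memory (they may be stale; check the current price sheets), and the token counts are just the ones from this comment:

```python
# Rough cost comparison for processing one 5MB invoice PDF.
# Prices are assumed list prices ($ per 1M input tokens) and may be out of date.
GEMINI_INPUT_PER_M = 1.25   # assumed price for Gemini 2.5 Pro
CLAUDE_INPUT_PER_M = 3.00   # assumed price for Claude 4 Sonnet

gemini_tokens = 5_000       # tokens Gemini billed for the PDF (from this comment)
claude_tokens = 56_000      # tokens Claude billed for the same PDF

gemini_cost = gemini_tokens / 1_000_000 * GEMINI_INPUT_PER_M
claude_cost = claude_tokens / 1_000_000 * CLAUDE_INPUT_PER_M

print(f"Gemini: ${gemini_cost:.5f}  Claude: ${claude_cost:.5f}  "
      f"ratio: {claude_cost / gemini_cost:.1f}x")
```

So even with only a 2-3x price gap per token, the 10x difference in billed tokens makes the per-document cost gap much larger.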
The only downside with Gemini (and it's a big one) is availability. We get rate limited by their dynamic QoS all the time, even when we haven't reached our quota. Our GCP sales rep keeps recommending "provisioned throughput," but it's expensive and doesn't fit our workload type. Plus, the Vertex AI SDK is kind of a PITA compared to Anthropic's.
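For what it's worth, a retry loop with exponential backoff and jitter usually rides out that kind of transient throttling. A minimal sketch; `RateLimitError` is a hypothetical stand-in for whatever 429 / RESOURCE_EXHAUSTED exception your SDK actually raises:

```python
import random
import time

class RateLimitError(Exception):
    """Hypothetical stand-in for the SDK's 429 / RESOURCE_EXHAUSTED error."""

def call_with_backoff(fn, max_retries=5, base_delay=1.0, sleep=time.sleep):
    # Retry fn() on rate-limit errors with exponential backoff plus jitter.
    # The last attempt re-raises so callers still see hard failures.
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            # delay = base * 2^attempt, scaled by random jitter in [0.5, 1.0)
            sleep(base_delay * (2 ** attempt) * (0.5 + random.random() / 2))
```

It doesn't fix the underlying quota problem, but it turns most dynamic-QoS 429s into latency instead of errors.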
Is there a crowd-sourced sentiment score for models? I know all these benchmark scores are juiced like crazy; I stopped taking them at face value months ago. What I want to know is whether other folks out there actually use these models, or find them unreliable.
Besides the LM Arena leaderboard mentioned in a sibling comment, if you go to the r/LocalLLaMA subreddit you can, very unscientifically, get a rough sense of sentiment about a model's performance by reading the comments (and maybe even checking the upvotes). I think the crowd's knee-jerk reaction is unreliable, but that's what you asked for.
Not anymore, though. It used to be the place to vibe-check a model ~1 year ago, but lately it's filled with toxic my-team-vs-your-team fights, memes about CEOs (wtf), and generally poor takes on a lot of things.
For a while it was China vs. the world, but lately it's even more divided, with heavy camping on specific models. You can still get some signal, but you have to either block a lot of accounts or read /new during different time zones to catch some of that "I'm just here for the tech stack" vibe from posters.
I don't really go there much anymore, but when I did, there seemed to be an inordinate amount of Chinese nationalism from young accounts writing in odd English.
Since the ranking is based on token usage, wouldn't it be skewed by the fact that small models' APIs are often used for consumer products, especially free ones? Meanwhile, reasoning models skew it in the opposite direction, since they emit far more output tokens per request, but to what extent I don't know.
It's an interesting proxy, but idk how reliable it'd be.
Thanks for your comment! I have a few PDFs that I need to generate for groups of users every so often, and since wkhtmltopdf is considered EOL, I've been forced to use Chrome (which sucks to manage). I just rewrote that code to use Typst (via the typst gem), and it's so, so much better.
Looks great! I'm definitely in the market for something like this, and the fact that it builds on top of Helm charts makes me want to try it out.
Can Canine automatically upgrade my Helm charts? That would be killer. I usually stay on cloud-hosted paid plans because remembering to upgrade is not fun. The other reason is that by the time I need the ops knowledge again, I've already forgotten it.
It can apply upgrades, but I don't think it solves your core problem, which is how to perform upgrades safely. Most of the time it's totally fine, but sometimes a config key changes across versions.
Upgrading helm charts without some manual monitoring seems like it might still be an unsolved problem :(
Congrats to the teams! Like others have said, your pricing ends up killing adoption for my company. We ended up self-hosting Airbyte. It ain't perfect but at least we're not paying $10/GB to replicate data within our own VPC.
I'm guessing any useful use of AI has already been adopted by some volunteers. Wikipedia might be able to build tools around the parts that work well, but the volunteers will end up footing the bill for the AI spend. Wikipedia will probably pivot to building an AI research product which they can sell to universities/ b2c.
> Wikipedia will probably pivot to building an AI research product which they can sell to universities/ b2c.
Why would they do this? All of Wikipedia is publicly available for any use. They literally have no competitive advantage (and don't seem interested in one, either).
Exactly. But using AI to summarize articles, stitch them together, etc. under the Wikipedia brand as a product is something they could easily sell. I can totally see a university buying WikiResearch™ for every student.
Same. Shorts are actually a great product in terms of capturing attention, but I don't want them on YouTube. I hear someone in the back shouting, "you're not the customer, you're the product!" But I pay for YouTube Premium; that makes me the customer, and I pay for the long-form content without ads! Yet 50% of YouTube Shorts are just ads or product marketing. I never feel good after a Shorts binge. Please, YouTube, let me turn it off.
You're still the product. Paying to remove ads doesn't change this. You're still being tracked. Unless something has changed recently, you're still being recommended videos.
No, I think Youtube really is the product. With Premium, you don't see any ads (at least the ones Youtube makes money from), and there's no way "tracking" makes them anywhere near as much money as a simple premium subscription.
I saw an article somewhere saying they're not even good for marketing.
They do grab your attention, but there's no lasting effect; each one is so short, and there's so much of it, that you quickly forget everything you've watched, including the ads.
They're good for the platforms, though, because effective or not, the platforms get paid good money for those ads.
I think AWS will need to update their documentation to communicate this. Will a snapshot isolation fix introduce a performance regression in latency or throughput? Or, maybe they stand by what they have as being strong enough. Either way, they'll need to say something.
I agree, but I have a feeling this isn't a small fix. Sounds like someone picked a mechanism that seemed to be equivalent but is not. Swapping that will require a lot of time and testing.
There is no trivial fix for this that doesn't hurt performance.
Roughly: there's no free lunch in distributed systems. AWS made a tradeoff, relaxing consistency guarantees for that specific setup, and didn't really advertise it.
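To illustrate the kind of gap at stake, here's a toy sketch of write skew, the textbook anomaly that snapshot isolation permits but serializability forbids. This is just an illustration of the general tradeoff, not a claim about AWS's actual mechanism:

```python
# Write skew: two "transactions" each read a private, consistent snapshot,
# check an invariant (at least one doctor must stay on call), and write
# disjoint rows. Each commit is valid against its own snapshot, and the
# write sets don't overlap, so snapshot isolation's first-committer-wins
# check sees no conflict; yet the combined result breaks the invariant,
# and no serial execution of the two transactions could produce it.

db = {"alice": "on_call", "bob": "on_call"}

def txn_go_off_call(snapshot, who):
    # Invariant checked against the private snapshot, not the live database.
    if sum(v == "on_call" for v in snapshot.values()) >= 2:
        return {who: "off_call"}  # write set, disjoint from the other txn's
    return {}

snap1 = dict(db)  # both transactions start from the same committed state
snap2 = dict(db)

writes1 = txn_go_off_call(snap1, "alice")
writes2 = txn_go_off_call(snap2, "bob")

db.update(writes1)  # commit 1: no write-write conflict
db.update(writes2)  # commit 2: no write-write conflict either

print(db)  # both doctors off call: the invariant is violated
```

Detecting this kind of read-write conflict (as serializable snapshot isolation does) is exactly the part that costs extra coordination, which is why "just fix it" likely isn't cheap.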
It looks like a bug, but the problem is that the documentation doesn't detail what guarantees are offered in this scenario. I'd love it if somebody could point me to where it does...