
I've always noticed that when I'm giving advice to someone or trying to help out, it always feels like their problem is easier than whatever problem I have. As someone with some anxiety around things like calling a company to get something done or asking a random stranger for help in a store, I would gladly do it if it was to help someone else (a family member or friend). But when it's for me I find it harder.

I wonder how much more confident and less anxious we can be psychologically when we're doing something for others vs. for ourselves.


Android users can enter *#*#INFO#*#* on the phone app's numeric keypad to open a diagnostics tool, which includes a submenu that shows signal strength in dBm.

It's handy for locating sweet spots and dead zones in my home.


One weird trick is to tell the LLM to ask you questions about anything that's unclear at this point. I tell it, e.g., to ask up to 10 questions. Often I do multiple rounds of these Q&A and I'm always surprised at the quality of the questions (w/ Opus). I get better results that way, just because it reduces the degrees of freedom in which the agent can go off in a totally wrong direction.

(From that link, about adding new subscriptions)

>The only real trick is that most YouTube channels use a vanity URL and it’s more complicated to get the channel ID in those instances.

Go to the channel's videos page ( https://youtube.com/.../videos ) -> right-click -> View page source -> search for "rssUrl" . It'll look like https://www.youtube.com/feeds/videos.xml?channel_id=UC...

Bonus: Replace the "?channel_id=UC..." with "?playlist_id=UULF..." to get a feed without shorts and livestreams.
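The manual "View page source -> search for rssUrl" step can be sketched as a small helper. This is an assumption-laden sketch: it presumes the channel page's HTML still embeds a `"rssUrl":"..."` JSON field, which could change at any time.

```python
# Hypothetical helper: pull the RSS feed URL out of a channel page's
# HTML source, mimicking the manual "search for rssUrl" step above.
import re

def extract_rss_url(page_source: str) -> "str | None":
    # The page embeds something like:
    # "rssUrl":"https://www.youtube.com/feeds/videos.xml?channel_id=UC..."
    match = re.search(r'"rssUrl"\s*:\s*"([^"]+)"', page_source)
    return match.group(1) if match else None
```

Feed the function whatever `curl` or "View page source" gives you for the channel's /videos page.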


>Proton

Using Proton as well, but if you're stuck on the free tier you can't use any 3rd-party email clients.

>YouTube

Using Google Takeout for YouTube will give you a .csv of your subscriptions and playlists (just be sure to un-check getting a download of your videos). From there you can get the RSS feeds and use RSSguard as a subscription viewer/media player. This site was a big help in figuring things out: https://charlesthomas.dev/blog/converting-my-youtube-subscri....
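The CSV-to-feeds step could look something like the sketch below. The "Channel Id" column name is an assumption (Takeout's export headers have varied over time), so check your file's header row first.

```python
# Minimal sketch: turn a Takeout subscriptions CSV into RSS feed URLs.
# Assumes a "Channel Id" column -- verify against your actual export.
import csv
import io

FEED_TEMPLATE = "https://www.youtube.com/feeds/videos.xml?channel_id={}"

def feeds_from_takeout(csv_text: str) -> "list[str]":
    reader = csv.DictReader(io.StringIO(csv_text))
    return [FEED_TEMPLATE.format(row["Channel Id"]) for row in reader]
```

The resulting URLs can be pasted straight into RSSguard (or any feed reader) as subscriptions.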


I forget where I originally saw this, but someone put together a document titled "How to Take all the Math Classes You Need." (https://docs.google.com/document/d/1G-hSdO5Tm9Nc6E4GobZZlwD0...)

While it assumes a level of competence of basic algebra, it essentially mimics a self-study math major and provides links to lecture recordings, widely used textbooks, problem sets, and answer banks to said problem sets. You obviously have to be self-motivated, but it beats paying $50 per month for the service OP's post links to.


I've got a very real-world use case I use DistilBERT for: learning to label WordPress articles. It is one of those things where it's kind of valuable (tagging) but not enough to spend loads on compute for it.

The great thing is I have enough data (100k+) to fine-tune and run a meaningful classification report over. The data is very diverse, and while the labels aren't totally evenly distributed, I can deal with the imbalance with a few tricks.
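One of the usual tricks for that kind of imbalance is inverse-frequency class weighting. A toy sketch (the label ids here are made up; the weights would go into the fine-tuning loss):

```python
import numpy as np

# Toy imbalanced label ids standing in for article tags.
labels = np.array([0, 0, 0, 0, 0, 0, 1, 1, 2])

# "Balanced" class weights (the same formula sklearn's
# compute_class_weight uses): weight_c = n_samples / (n_classes * count_c).
# Rare tags get proportionally larger weights, so the loss isn't
# dominated by the most common labels.
counts = np.bincount(labels)
weights = len(labels) / (len(counts) * counts)
```

With the toy labels above, the rarest class ends up weighted 6x heavier than the most common one.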

Can't wait to swap it out for this and see the changes in the scores. Will report back


> the "taste-skill discrepancy." Your taste (your ability to recognize quality) develops faster than your skill (your ability to produce it). This creates what Ira Glass famously called "the gap," but I think of it as the thing that separates creators from consumers.

This resonated quite strongly with me. It puts into words something that I've been feeling when working with AI. If you're new to something and using AI for it, it automatically boosts the floor of your taste, but not your skill. And you end up never slowing down to make mistakes and learn, because you can just do it without friction.


I wrote a bit about this the other day: https://simonwillison.net/2025/Jun/27/context-engineering/

Drew Breunig has been doing some fantastic writing on this subject - coincidentally at the same time as the "context engineering" buzzword appeared but actually unrelated to that meme.

How Long Contexts Fail - https://www.dbreunig.com/2025/06/22/how-contexts-fail-and-ho... - talks about the various ways in which longer contexts can start causing problems (also known as "context rot")

How to Fix Your Context - https://www.dbreunig.com/2025/06/26/how-to-fix-your-context.... - gives names to a bunch of techniques for working around these problems including Tool Loadout, Context Quarantine, Context Pruning, Context Summarization, and Context Offloading.


I built a Chrome extension with one feature that transcribes audio to text in the browser using huggingface/transformers.js running the OpenAI Whisper model with WebGPU. It works perfectly! Here is a list of examples of all the things you can do in the browser with WebGPU for free. [0]

The last thing in the world I want to do is listen to or watch presidential social media posts, but, on the other hand, sometimes enormously stupid things are said which move the S&P 500 up or down $60 in a session. So this feature queries for new posts every minute, does OCR image-to-text and transcribes video audio to text locally, sends the post with text for analysis, all in the background inside a Chrome extension, before notifying me of anything economically significant.

[0] https://github.com/huggingface/transformers.js/tree/main/exa...

[1] https://github.com/adam-s/doomberg-terminal


I built a tool for this a while back: https://github.com/dogsheep/apple-notes-to-sqlite

I just tried it and it still works:

  uvx apple-notes-to-sqlite /tmp/notes.db
  # in another terminal while that's running
  uvx datasette /tmp/notes.db

I did my PhD in Atomic, Molecular, and Optical (AMO) physics, and despite "optical" being part of that I realized midway that I didn't know enough about how regular cameras worked!

It didn't take very long to learn, and it turned out to be extremely important in the work I did during the early days at Waymo and later at Motional.

I wanted to pass along this fun video from several years ago that discusses HDR: https://www.youtube.com/watch?v=bkQJdaGGVM8 . It's short and fun, I recommend it to all HN readers.

Separately, if you want a more serious introduction to digital photography, I recommend the lectures by Marc Levoy from his Stanford course: https://www.youtube.com/watch?v=y7HrM-fk_Rc&list=PL8ungNrvUY... . I believe he runs his own group at Adobe now after leading a successful effort at Google making their pixel cameras the best in the industry for a couple of years. (And then everyone more-or-less caught up, just like with most tech improvements in the history of smartphones).


That's so unfortunate--I've also used Pocket for a decade+. I had the Omnivore app installed on my phone as a replacement for the other infinite feed scrolling apps.

I'm actually working on an open-source alternative at https://curi.ooo if you're interested in checking it out. It's a work in progress, but I'm building it primarily for my own use because I'm frustrated with all these services shutting down.

The Kobo integration you have is interesting too, wonder how I could support that use case...


The short answer: use a semantic layer.

It's the cleanest way to give the right context and the best place to pull a human in the loop.

A human can validate and create all important metrics (e.g. what does "monthly active users" really mean) then an LLM can use that metric definition whenever asked for MAU.

With a semantic layer, you get the added benefit of writing queries in JSON instead of raw SQL. LLMs are much more consistent at writing a small JSON query than hundreds of lines of SQL.
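To make that concrete, here is a rough sketch of a semantic-layer query in Cube's JSON query shape. The measure and dimension names are hypothetical; they would come from your own cube definitions, where a human has already vetted what each metric means.

```python
import json

# What an LLM would emit instead of hundreds of lines of SQL:
# a small, structured query against pre-defined, human-vetted metrics.
mau_query = {
    "measures": ["Users.monthlyActiveUsers"],      # hypothetical measure name
    "timeDimensions": [
        {"dimension": "Users.createdAt", "granularity": "month"}
    ],
    "limit": 12,
}

payload = json.dumps(mau_query)  # ready to POST to the semantic layer's API
```

Because the query is a small, constrained JSON object, it is also much easier to validate before execution than free-form SQL.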

We[0] use Cube[1] for this. It's the best open-source semantic layer, but there are a couple of closed-source options too.

My last company wrote a post on this in 2021[2]. Looks like the acquirer stopped paying for the blog hosting, but the HN post is still up.

0 - https://www.definite.app/

1 - https://cube.dev/

2 - https://news.ycombinator.com/item?id=25930190


Having lived in NYC, Hong Kong and Singapore, the best system around all of this is...

Singapore hawker center.

Turn up to somewhere ~10 mins or less from your location. Have a great meal for $5 (US) or less.

Continue

No delivery fees, delivery emissions or waste. Talk to people... The list goes on.


FYI: YouTube provides an RSS feed for every channel. The URL is as follows:

    https://www.youtube.com/feeds/videos.xml?channel_id=CHANNEL_ID
And without downloading with yt-dlp, videos can be watched from youtube-nocookie.com in full-window mode (no distractions) under:

    https://www.youtube-nocookie.com/embed/VIDEO_ID
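A tiny sketch of building these feed URLs, including the shorts-free variant mentioned further up the page (swapping the "UC" channel-id prefix for the "UULF" playlist-id prefix):

```python
# Channel IDs start with "UC"; the corresponding "no shorts/livestreams"
# uploads playlist swaps that prefix for "UULF".
def channel_feed(channel_id: str) -> str:
    return f"https://www.youtube.com/feeds/videos.xml?channel_id={channel_id}"

def no_shorts_feed(channel_id: str) -> str:
    assert channel_id.startswith("UC"), "expected a UC... channel id"
    return ("https://www.youtube.com/feeds/videos.xml?playlist_id=UULF"
            + channel_id[2:])
```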

There was Joost in 2008, from the Skype founders. Skype was originally P2P until the Microsoft acquisition killed this legally questionable feature - need to feed the big brother (: Joost raised ~$50M.

I remember it because it was one of the rare apps built in XUL, the same framework as Mozilla's apps (Firefox).

https://en.m.wikipedia.org/wiki/Joost


There is a "Daily Urine Splash Estimation in the US" section in their paper (I can't believe I looked this up). Their equation basically makes these assumptions:

1. 56 million non-residential urinals in the US.

2. an average of 0.22 L per "void" (a void is one pee session)

3. I think how they estimated average usage per urinal was weird and frankly wrong - they estimated each person would have between 3 and 6 "voids" per day, and each urinal would be used by between 1 and 2 people per day. Anyway, in any case that leads to an estimate of each urinal being used between 3 and 12 times per day. I think this estimate is way, way too high, because at the low end 3 X 56 million = 168 million, so on the low end they are estimating that, on average, every male in the US makes at least one public urinal pee (and, on the high end, 4 public urinal pees!)

4. Based on their data, they calculate that ~1% (0.965%) of pee gets splashed onto the floor.

So you multiply that all together: 56 million * 0.22 * (3 on the low end, 12 on the high end) * 0.965% = about 350,000 L on the low end, or 1,400,000 L on the high end, so they said "on the order of a million liters".
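The back-of-the-envelope above, written out:

```python
# The paper's assumptions, multiplied through.
URINALS = 56_000_000        # non-residential urinals in the US
LITERS_PER_VOID = 0.22      # liters per pee session
SPLASH_FRACTION = 0.00965   # ~1% of each void ends up on the floor

low = URINALS * LITERS_PER_VOID * 3 * SPLASH_FRACTION    # 3 uses/urinal/day
high = URINALS * LITERS_PER_VOID * 12 * SPLASH_FRACTION  # 12 uses/urinal/day
# low is roughly 350,000 L/day, high roughly 1,400,000 L/day,
# hence "on the order of a million liters".
```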

Again, I can't believe I spent time looking this up and writing this comment.


I use Piper for one of my apps. It runs on CPU and doesn't require a GPU. It will run well on a raspberry pi. I found a couple of permissively licensed voices that could handle technical terms without garbling them.

However, it is unmaintained and the Apple Silicon build is broken.

My app also uses whisper.cpp. It runs in real time on Apple Silicon or on modern fast CPUs like AMD's gaming CPUs.


Anyone who wants to demystify ML should read: The StatQuest Illustrated Guide to Machine Learning [0] by Josh Starmer. To this day I haven't found a teacher who could express complex ideas as clearly and concisely as Starmer does. It's written in an almost children's-book-like format that is very easy to read and understand. He also just published a book on NNs that is just as good. Highly recommend it even if you are already an expert, as it will give you great ways to teach and communicate complex ideas in ML.

[0]: https://www.goodreads.com/book/show/75622146-the-statquest-i...


I want to highlight this blog article, which in my opinion gives more actionable advice than all of the books listed (I read 6/8 of those) on the topic of having a good social life:

https://www.neelnanda.io/blog/mini-blog-post-23-taking-socia...

tldr; taking social initiative is like a cheat code.

The easiest way to make friends is to start organizing. I started applying this approach to my social life and I created two new friend groups from scratch.

1. My brother and I started organizing lasertag matches. Basically anyone is invited; we even post open invitations to all our friends. Right now we have 20 people in our group chat and we do lasertag + beers once a month.

2. I created a group of friends from high school - like a reunion, but once every month. When I reached out to people, everyone was hyped. I have only had one of these meetings so far, but everyone was happy and said they want more. We will be having a second one soon.

If I needed more friends that's what I would be focusing on. Organizing a cyclic get together of people. It's like a cheat code for having a lot of friends.

Right now I am a bit time-constrained, but I look forward to organizing something different in the future.


There is a movement called Cohousing that I don’t think gets enough attention.

I prefer to call it Tiny Neighborhoods, which is basically what the article describes.

If interested, I did a 3,000-word deep dive into the specifics of how tiny neighborhoods have worked over the past 50 years…

https://iambateman.com/tiny


Free reminder that the USGS is involved in an epic, nearly decade-long collection of mid- and high-density lidar of the entire continental USA, and the QC'd data (point cloud & derived) is published gratis for everyone to use:

https://www.usgs.gov/3d-elevation-program

https://www.usgs.gov/media/images/3d-elevation-program-fy25-...

https://www.usgs.gov/3d-elevation-program/3dep-spatial-metad...


I worked at OkCupid from 2013-2017 and totally resonate with the author that mid-2010s OkCupid was a really special product, and that it took a steep decline as the decade went on. It's not entirely fair to say that the Match acquisition immediately caused that decline; I started a couple years after Match got the company in its hands, and only two of the original founders were still focused on OkCupid full time. But the product continued to improve and grow for years after that. There were very few top-down directives about how to develop the product during that time.

OkCupid had excellent growth in the first half of the 2010s, but as that growth started to plateau, it was pretty clear that the focus moved to following Tinder's trends in an effort to match their level of growth. But OkCupid was a really healthy company with great profits and low burn, being only a team of 30-40 people. It could have stayed the way it was and continued to turn a profit. But Tinder had shown that the market size for mobile was way bigger than the desktop-focused product that OkCupid used to be. The focus on acquiring more mobile users meant stripping down and simplifying a product that previously demanded hundreds of words of essay writing and answers to hundreds of questions. The essay prompts became simpler, and multiple-choice asymmetric questions got deprioritized in favor of reciprocal yes/no questions. And as a user, I felt the quality of conversations I had went down as most messages were sent on the go from people just trying to line up their weekend plans, instead of a deeply invested audience trying to form meaningful connections first.

I really miss working on the product OkCupid was when I started, and often day-dream about starting another dating app closer to its original long-form vision. But the worst part of trying to do that is bootstrapping users, and it seems like the only ways to do that are to either have a lot of capital, or use shadier methods like fake profiles or scraping data off of other sites. Not really interested in raising or setting my morals aside to do it.


I've been working on an introductory STATS book for the past couple of years and I totally understand where the OP is coming from. There are so many books out there that focus on technique (the HOW), but don't explain the reasoning (the WHY).

I guess it wouldn't be a problem if the techniques being taught in STATS101 were actually usable in the real world. A bit like driving a car: you don't need to know how internal combustion engines work, you just need to press the pedals (and not endanger others on the road). The problem is that z-tests, t-tests, and ANOVA have very limited use cases. Most real-world data analysis will require more advanced models, so the STATS education is doubly problematic: it does not teach you useful skills OR general principles.

I spent a lot of time researching and thinking about the STATS curriculum and choosing which topics are actually worth covering. I wrote a blog post about this[1]. In the end I settled on a computation-heavy approach, which allows me to do lots of hands-on simulations and demonstrations of concepts, something that will be helpful for tech-literate readers, but I think also for non-tech people, since it will be easier to learn Python+STATS than to try to learn STATS alone. Here is a detailed argument about how Python is useful for learning statistics[2].

If you're interested in seeing the book outline, you can check this Google Doc[3]. Comments welcome. I'm currently writing the last chapter, so hopefully I will be done with it by January. I have a mailing list[4] for ppl who want to be notified when the book is ready.

[1] https://minireference.com/blog/fixing-the-statistics-curricu...

[2] https://minireference.com/blog/python-for-stats/

[3] https://docs.google.com/document/d/1fwep23-95U-w1QMPU31nOvUn...

[4] https://confirmsubscription.com/h/t/A17516BF2FCB41B2


Starter packs are great. What about seeing which accounts currently get the most engagement? https://www.graphtracks.com

I would like a tool that very easily converts x months of credit card bills into a CSV (the transaction table from across PDFs, and across pages within each PDF) or something similar.

I wrote about my approach here: https://sjer.red/blog/2023/screen-time/

Life pro tip (I believe that everyone should do this):

- Buy a domain and set up a custom email address that represents you, like firstname@firstlast.com - you own this domain and email address, and no company, with the possible exception of your registrar, has any control over it or authority to take it from you.

- Set up a dummy gmail/proton/whatever acct with a random address - this address will never be used or exposed publicly but it will represent your online email hosting acct.

- Forward your custom email address to the email provider address and configure the web client to send from your custom address.

- Set your provider email account up in a local client like Outlook that allows you to create a local backup.

- Continue watching your previous account and updating your accounts to your new lifetime address. At some point, you should be getting minimal emails to the old account; then you can forward it to your new one.

The idea here is that you've decoupled your identity (your email address) from your webmail provider (gmail)

So google inexplicably cuts your access. Now what?

No problem. You have a local backup of all your emails in Outlook. You repeat the process with a different service like Proton (or a new Gmail acct) with a new dummy email. Then you set the new acct up in Outlook and drag all of the emails from your old acct into the new one. You haven't missed a beat: you're still sending and receiving emails to/from the same address, and you can access all your historical emails in the new hosting acct because you migrated/synced them all over locally in Outlook.

Losing access to your email identity is arguably one of the most catastrophic scenarios you can think of in terms of being online. This guards against that about as much as possible. It doesn't cover other services like voice and stuff but you can follow similar strategies for things like documents and files.


Very interesting experiment. I've been working on a handwriting application [0] for the past couple of years and incorporating the ability to take a picture to convert it into digital ink would be really nice.

[0] https://scrivanolabs.github.io

