Hacker News | past | comments | ask | show | jobs | submit | thornton's comments

I was expecting to click a link to an indie hacker’s blog post outlining how it’s beneficial to “fish in the wrong place” to solve a problem, or something like that


I thought it was going to be about the shell


This is one of those times when I want someone to explain the value to me. Like is this to help coding agents be more efficient?

Forgive my ignorance!


I believe that's mostly for fun. Coding agents wouldn't need to interact via the same interfaces humans use, they'd be given a tool to read and write files and they'd be fine with that.


Unfortunately, this characterizes the entire project: "cool" examples with no practical utility. Meanwhile, the language itself is incredibly strange (defining functions via patterns, for example), extremely slow, and very unstable.

In short, it's developing in the wrong direction.

I switched from Mathematica to Matlab in my work; it was the best time investment of the entire project.


This function is user contributed. It's not official.

They're literally using diff/patch under the covers, at least in the setup I'm currently using.


Did you get them working with diff syntax? I couldn't figure it out, so I tried a bunch of agentic programs, found a few that actually worked, and it turned out they all use search/replace strings. There are probably other ways to do it, but it seems basically everyone settled on that.

I've been trying that with smaller models and had to make some adjustments (e.g. they all really wanted to include the filename twice). So I just made a small tweak and, bam, suddenly I can edit code with small, fast, cheap models.
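For what it's worth, the search/replace edit these tools converged on can be sketched in a few lines. This is my own hedged sketch (the function name and error handling are mine, not any particular agent's implementation):

```python
def apply_edit(path: str, search: str, replace: str) -> None:
    """Apply one search/replace edit of the kind coding agents emit."""
    with open(path) as f:
        text = f.read()
    # Require exactly one match, so an ambiguous or stale search block
    # produces a clear error the model can see and retry on.
    hits = text.count(search)
    if hits != 1:
        raise ValueError(f"search block matched {hits} times, expected 1")
    with open(path, "w") as f:
        f.write(text.replace(search, replace))
```

The "filename twice" quirk is easy to tolerate in a harness like this: just accept and ignore a repeated path line before the search block.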


The thought of forcing the AI to use vim gave me a nice chuckle. Thank you sir.


I found ChatGPT to be bad at VimGolfing.

```
Here is a 35-keystroke solution that beats your 36-keystroke solution!
<89 keystrokes>
```

And then it keeps looping the same way, much like when you ask it about the seahorse emoji (or it sometimes just lies about the keystroke count).

In fact that's not surprising; what's rather surprising is that some of the solutions (the >= 100 keystroke ones) actually work.


There's probably no point training LLMs to be good at vim golf. The whole point of vim’s funky language is that human keypresses are very valuable and should not be wasted. Saving keystrokes for an LLM is a non-goal at best.


I guess the point is to win at Vim Golf, i.e. to see how efficient one can get.


We’ve done similar work. Use case was identifying pages in an old website that now 404 and where they should be redirected to.

Basically doc2vec and cosine similarity. The matching outputs were totally nonsensical, to the point that matching on title-tag vectors or the precis worked better, so now I’m curious whether we just did something wrong…
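For concreteness, the pipeline was roughly this, with a trivial term-frequency vector standing in for the doc2vec embedding (the URLs and text are made-up examples, and a real setup would call the embedding model here):

```python
from collections import Counter
from math import sqrt

def tf_vector(text):
    # Trivial term-frequency "embedding"; a real pipeline would use
    # doc2vec / Paragraph Vectors or a transformer model instead.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_redirect(dead_page_text, live_pages):
    # live_pages: {url: page_text}; redirect the 404 to the most
    # similar live page by cosine similarity.
    v = tf_vector(dead_page_text)
    return max(live_pages, key=lambda url: cosine(v, tf_vector(live_pages[url])))

target = best_redirect(
    "pricing plans for the enterprise tier",
    {
        "/pricing": "our pricing plans and enterprise tier options",
        "/blog": "company news and engineering posts",
    },
)
```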


If by 'doc2vec' you mean the word2vec-like 'Paragraph Vectors' technique: even though that's a far simpler approach than the transformer embeddings, it usually works pretty well for coarse document similarity. Even the famous word2vec vector-addition operations kinda worked, as illustrated by some examples in the followup 'Paragraph Vector' paper in 2015: https://arxiv.org/abs/1507.07998

So if for you the resulting doc-to-doc similarities seemed nonsensical, there was likely some process error in model training or application.


Weird unethical employer hack aside… no one who works with/for me is interested in an adjacent position with a more promising career trajectory, so how common is this?


Isn’t moving from an IC role to management exactly that? I know an ex-engineer well who switched to management because they wanted to be a VP (their words).


Agreed!


AI is trained on the old way of doing things, so it can keep generating code or UIs that are all, you know, predictions based on what the past was like. But we’re at this weird inflection point where more of the same can’t really be the answer. Everyone kind of agrees that chat, which is being used for pretty much everything right now, is a non-optimal user experience for most use cases. And yet that’s what we’re doing.

We don’t really know any better. Even agents that take 15 minutes and then come back to you will summarize a bunch of stuff along the way. That’s considered good UX; it’s the best practice right now: using a small model to summarize a thinking model’s reasoning as you go, so that the user knows that while they’re waiting, things are actually happening.

So I think whatever comes next will be something new, and therefore it’s gonna be hard for AI in its current, LLM-driven form to solve for it without us doing some of that human-computer interaction design thinking for a long while.


The main problem with Google's anti-competitive behavior is how Chrome is being leveraged to mine user-behavior signals for search-engine algos. That is the unfair advantage that needs to be killed off.


That’s 20.83%. I don’t think it’s that far off.

I just opened Screen Time on my iPhone, checked devices for my phone, selected the weekly tab, and flipped back through the last few weeks to get an average of 42 hours per week; with 168 hours in a week, that puts me at 25% for December.

I’m apparently above average!
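The arithmetic, assuming the cited 20.83% corresponds to 35 hours out of a 168-hour week (that 35-hour figure is my back-calculation, not from the article):

```python
hours_per_week = 7 * 24               # 168
cited_share = 35 / hours_per_week     # the 20.83% figure, back-calculated
my_share = 42 / hours_per_week        # 42 h/week from Screen Time
print(f"cited: {cited_share:.2%}, mine: {my_share:.2%}")
# → cited: 20.83%, mine: 25.00%
```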


These systems will collapse over time because the incentives for them to exist are being removed. So you won’t be able to point to your answers on Quora or wherever, but they’ll live on in the training records and data, in some shape, in neural nets being monetized.

I’m not anti what’s happening, or for it; it’s just that social credit depends on those institutions surviving.


Google is doing this because political content is harder to fact-check, and they do have initiatives to fight misinformation online. Autocomplete/suggest results are an aggregation of related searches, so people could search something false and it would show up in autocomplete, making a user think the statement is true. Essentially, autocomplete can be hijacked.

Auto-suggest was also originally an effort to reduce the number of completely unique searches, which is generally quite high, something like 30%! By funneling people to certain results pages, Google could increase bids on ads for those results and have an easier time with QC by not having to account for as many unique searches. By leaving autocomplete off, they actually generate more search-intent data around those types of queries, which they can do whatever they want with. I’m sure Google can predict a close election better with auto-suggest off for political search intent.

While I don’t love anything G does, I do think that in this case turning off autocomplete for certain types of news results is good practice. If Bing and DDG are showing autocomplete, I hope those suggestions are being monitored.

