Ycros's comments

> there's enough support for it across various things that it's not going anywhere

They said the same about the ISO-8859-* encodings and the Webdings/Wingdings fonts under Windows. Gone. Forever.

Wingdings is available in OTF format to put on your web site as a webfont: https://www.onlinewebfonts.com/fonts/wingdings_OTF

So is Webdings: https://www.dafontfree.io/webdings-font/

Webdings even got integrated into Unicode 7.0, so all the Noto fonts support it: https://en.wikipedia.org/wiki/Webdings
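For example, WORLD MAP (U+1F5FA) is one of the glyphs that came into Unicode 7.0 via Webdings; quick check in Python (codepoint from memory, so verify):

    # WORLD MAP, added in Unicode 7.0, originally a Webdings glyph.
    print("\U0001F5FA")            # 🗺
    print(hex(ord("\U0001F5FA")))  # 0x1f5fa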

And recode(1) has full support for ISO-8859-*, as do iconv and Python 3's codecs module. I'm pretty sure browsers can render pages in them, too: Firefox keeps rendering UTF-8 pages as if they were ISO-8859-1 encoded whenever I screw up setting the charset parameter on their Content-Type.
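For instance, round-tripping a string through ISO-8859-1 needs nothing beyond the standard library:

    import codecs

    # Encode to ISO-8859-1 bytes and decode back, stdlib only.
    raw = codecs.encode("déjà vu", "iso-8859-1")
    print(raw)                               # b'd\xe9j\xe0 vu'
    print(codecs.decode(raw, "iso-8859-1"))  # déjà vu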


> Webdings even got integrated into Unicode 7.0,

That's the point. Think again.


It seems incompatible with the idea that it's "Gone. Forever." Thinking again doesn't change that for me. The only thing that's gone is the exclusivity to a single proprietary-software vendor.

A simple case: thanks to standards like Usenet and IRC, Amigans can still connect, and via Bitlbee they get a bridge to several chat services. With Discord and such it's more difficult, but for Jabber there's no issue at all. Ditto with AmiSSL and Jabber or Gemini clients. They can reuse Amiga 4000 machines (or FPGA-based ones), browse small sites and Gopher, connect to Bitlbee, and make tons of services usable again.

With Nerd Fonts, these will become obsolete in future Unicode releases.

GNU Unifont and the Unicode table might be backported to the Amiga. With Nerd Fonts, you need to do the work twice.


I'd recommend anyone looking at these three languages to give Odin a try.


I second that! I was trying Zig for some small projects, but ended up switching to Odin, because I found it much more comfortable!


Is this why I've seen a number of "AUP violation" false positives popping up in Claude Code recently?


ollama has always had a weird attitude towards upstream, and then they wonder why many in the community don't like them


> they wonder why many in the community don't like them

Do they? They probably care more about their "partners".

As GP said:

  By reimplementing this layer, Ollama gets to enjoy a kind of LTS status that their partners rely on


It uses a video feed and asks you to look in certain directions. At least the one instance I've encountered did.


Yeah. Certainly something AI-generated video couldn’t solve.


It shouldn't be too difficult to determine whether the camera is pointed at a real face vs. a screen showing an AI-generated image.


This seems reasonable to me, surely it should be its own repository.


I prefer btop: it does all the usual process monitoring, and the latest versions handle GPUs as well.


Really? Mine is v1.3.2 and doesn't show Intel Iris Xe Graphics!

{UPDATE} I see: no Intel GPU support yet!


Having played around with this sort of thing in the llama.cpp ecosystem when they added it a few weeks ago, I will say that it also helps if your models a) are tuned to output JSON and b) are prompted to do so. Anything you can do to help the output fit the grammar helps.
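Rough sketch of what I mean, assuming a llama.cpp server running on localhost:8080 and the json.gbnf grammar that ships in the repo's grammars/ directory:

    import json
    import requests

    # Constrain sampling to valid JSON via a GBNF grammar, and also
    # *ask* for JSON in the prompt so the model isn't fighting the grammar.
    with open("grammars/json.gbnf") as f:
        grammar = f.read()

    resp = requests.post("http://localhost:8080/completion", json={
        "prompt": "Return a JSON object with keys 'name' and 'age':\n",
        "grammar": grammar,
        "n_predict": 128,
    })
    print(json.loads(resp.json()["content"]))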


Every time I look at LangChain it seems like unnecessary abstraction. The value in this example is in the prompts.


So what are the alternatives to LangChain that the HN crowd uses?

I see two contenders:

https://github.com/minimaxir/simpleaichat/tree/main/simpleai...

https://github.com/griptape-ai/griptape

There is also the llm command line utility that has a very thin underlying library, but which might grow eventually: https://github.com/simonw/llm


Just code it yourself. Most of the core logic can be replaced with a function that inserts some parameters into a string template and calls an API.
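Something like this sketch covers a lot of it (hypothetical template and model name, using the pre-1.0 openai library):

    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    TEMPLATE = "Summarize the following text in one sentence:\n\n{text}"

    def run_prompt(text):
        # Fill the template and make one API call; that's the whole "chain".
        messages = [{"role": "user", "content": TEMPLATE.format(text=text)}]
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo", messages=messages)
        return response["choices"][0]["message"]["content"].strip()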


This was the answer for me as well. It's pretty cool that we're still at the point where, if you have an idea, you can build a proof of concept extremely quickly and easily.


I've been enjoying using (and contributing to) Langroid, it's a new multi-agent LLM framework https://github.com/langroid/langroid


I've been actively contributing to Langroid as well. It is easy to use, and the intuitive design allows for the rapid development of LLM applications, streamlining the whole process. Highly recommended for anyone looking into this space!


If you work with JS or TS, check out this alternative that I've been working on:

https://github.com/lgrammel/modelfusion

It lets you stay in full control over the prompts and control flow while making a lot of things easier and more convenient.


LMQL - https://lmql.ai/

Guidance (Microsoft) - almost abandoned - https://github.com/microsoft/guidance


How do you know guidance is almost abandoned? Did they announce it?


    import openai
    import os

    openai.api_key = os.environ.get('OPENAI_API_KEY')
    gpt_model = 'gpt-3.5-turbo'  # or any other chat-capable model

    def completion(messages):
        # One chat completion call; temperature=0 keeps output deterministic.
        response = openai.ChatCompletion.create(
            model=gpt_model, temperature=0, messages=messages)
        return response['choices'][0]['message']['content'].strip()

    response = completion([
              {"role": "system", "content": "You are a helpful assistant."},
              {"role": "user", "content": "Who won the world series in 2020?"} ])

    #####

    import json
    import tiktoken
    import os

    tokenizer = tiktoken.get_encoding("cl100k_base")
     
    class Message:
        def __init__(self, role, text, length=None):
            self.role = role
            self.text = text
            if length is not None:
                self.length = length
            else:
                self.length = self._count_tokens(text)
            print("New message, token length is", self.length)

        def _count_tokens(self, text):
            # Count tokens under the cl100k_base encoding used by chat models.
            tokens = tokenizer.encode(text)
            return len(tokens)

    class History:
        def __init__(self, ID=None):
            self.messages = []
            self.ID = ID

            if self.ID:
                self._load_from_json()

        def add(self, role, text):
            message = Message(role, text)
            self.messages.append(message)
            self._save_to_json()

        def _save_to_json(self):
            if not self.ID:
                return

            data = {
                "messages": [{"role": m.role, "text": m.text, "length": m.length} for m in self.messages]
            }
            self.create_dir_if_not_exists('conversations')
     
            with open(f"conversations/{self.ID}.json", "w") as f:
                json.dump(data, f)

        def create_dir_if_not_exists(self, directory_path):
            if not os.path.exists(directory_path):
                os.makedirs(directory_path)

        def _load_from_json(self):
            try:
                self.create_dir_if_not_exists('conversations')
                with open(f"conversations/{self.ID}.json", "r") as f:
                    data = json.load(f)
                    # Reuse stored token lengths rather than re-tokenizing.
                    self.messages = [Message(m["role"], m["text"], m.get("length"))
                                     for m in data["messages"]]
            except FileNotFoundError:
                pass

        def recent_messages(self, max_tokens):
            # Walk the history newest-first, keep messages while they fit in
            # the token budget, then flip back to chronological order.
            recent_messages_reversed = []
            total_tokens = 0

            for m in reversed(self.messages):
                if total_tokens + m.length <= max_tokens:
                    recent_messages_reversed.append({
                        "role": m.role,
                        "content": m.text
                    })
                    total_tokens += m.length
                else:
                    break

            recent_messages = recent_messages_reversed[::-1]

            return recent_messages


In your loop:

            for m in reversed(self.messages):
                if total_tokens + m.length <= max_tokens:
                    recent_messages_reversed.append({
                        "role": m.role,
                        "content": m.text
                    })
                    total_tokens += m.length
                else:
                    break
It would be important to change that to not drop system prompts, ever. Otherwise a user can defeat the system prompt simply by providing enough user messages.


Good point. The way I use it, though, is to always add the system prompt to the front after calling that function.
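Something like this, on top of the History class above (helper name made up):

    SYSTEM_PROMPT = {"role": "system",
                     "content": "You are a helpful assistant."}

    def build_messages(history, max_tokens):
        # Prepend the system prompt after truncation, so it can never
        # be pushed out by a long run of user messages.
        return [SYSTEM_PROMPT] + history.recent_messages(max_tokens)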


This feels like it should be a charity or a non-profit entity.


There are already a lot of foundations that haven't been able to scale the process of paying thousands of developers, due to tax and employment issues across the globe. We think creating the right commercial incentives would have a better chance, but we might also be wrong. Time will tell...

