DRF falls apart very quickly when you get away from the "simple CRUD" use cases.
The Serializer mechanism causes N+1 query behavior by default with non-trivial models. This is exacerbated by the fact that those queries all run in sequence, because there's no async support.
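A minimal sketch of where the N+1 comes from, with hypothetical models (nothing exotic, just a stock ModelSerializer):

    from django.db import models
    from rest_framework import serializers

    class Author(models.Model):
        name = models.CharField(max_length=100)

    class Book(models.Model):
        title = models.CharField(max_length=100)
        author = models.ForeignKey(Author, on_delete=models.CASCADE)

    class BookSerializer(serializers.ModelSerializer):
        # Reaching across the FK is what triggers the extra per-row query.
        author_name = serializers.CharField(source="author.name")

        class Meta:
            model = Book
            fields = ["title", "author_name"]

Serializing `Book.objects.all()` with `many=True` then runs one query for the books plus one query per book for its author; nothing fixes that for you unless the view remembers to use `select_related("author")`.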
Its documentation is also pretty poor, IMO.
> There’s a vibe, or assumption around Ninja that it’s “newer”, or “like FastAPI”, or “the way forward”, none of which are objective benefits per se.
The creator of DRF also created Starlette, which underpins FastAPI. If DRF works for you, cool, but that doesn't mean the community hasn't moved on.
OpenWrt is commonly used to breathe new (or better) life into consumer routers, but you can also run it on any x86 machine. I run it on a small, passively-cooled i5 machine* virtualized in Proxmox alongside Home Assistant. I originally ran OPNSense, then VyOS, but eventually went back to OpenWRT since it does everything I want/need with a simple interface.
Honestly, WSL has got nothing over a VirtualBox VM with a virtio-net network device.
- A real VM you can snapshot and transfer to another computer running VirtualBox, even one on a different OS, since VBox supports other host operating systems.
- With VcXsrv or Xorg on Cygwin you get graphical apps just the same, and with the virtio-net network card on the VM (instead of VirtualBox's default Intel PRO/1000 - my protip :D; the one-line command to switch it is below the list) you get enough speed to watch YouTube in Google Chrome - i.e. more than sufficient for using an IDE like PyCharm etc.
- With VSCode you don't even need an X server. Just use "Remote - SSH" to connect to your VM and keep your local machine's lower input lag - it works just the same as "Remote - WSL", as both essentially use the same mechanism: a remote connection to a Linux server.
- Services run automatically, as it's just a normal Linux install. No need to 'service docker start' on every WSL run
- Doesn't hijack your virtualization extensions the way WSL does. WSL needs Hyper-V, which actually boots your computer under the Hyper-V hypervisor and blocks nested virtualization, meaning you can't properly use other virtualization technologies outside of Hyper-V if you use WSL*
* Btw, you also get a (small, but non-zero) performance hit on your Windows box by using WSL, even when it is not running, since now Hyper-V is what boots your system as the ring-0 guy. Yes, your Windows install basically becomes a "VM" hosted by Hyper-V.
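For reference, switching the VM's NIC type is a one-liner on the host while the VM is powered off (assuming a VM named "devbox" - substitute your own): `VBoxManage modifyvm "devbox" --nictype1 virtio`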
Unpopular opinion: WSL is okay for getting work done quickly, but if you do a good VirtualBox setup there are solutions with better tradeoffs in how much control you keep over your PC's hardware, imo.
First root your phone (it's easy, don't worry) - Superuser, Juice Defender, SetCPU, Chrome to Phone, Google Goggles, Google Voice, Unified Remote (for media center PC), Link2SD, Wireless Tether, Titanium Backup, Quick Settings, Power Control Widget, Audio Galaxy (plays your music off of network drives), Qik (live video), Twidroyd, Dropbox, Springpad (along with the Chrome extension), Google Docs, and of course Google Navigation and Gmail.
One of my favorite tricks is sharing a photo to it, then running a quick `python -m http.server 8000` while on wifi. It makes it dirt simple to send a photo from my phone to any local machine on my network without the hassle of cloud services (including generation loss from recompression), incompatible apps, bluetooth pairing, etc.
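On the receiving machine it's then just a browser visit or a one-liner like `curl -O http://192.168.1.50:8000/IMG_1234.jpg` (the IP and filename are placeholders - use your phone's LAN address and the actual photo name).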
GPT-4 can already write a lot of useful software based on a prompt. There are multiple startups doing it.
It definitely has significant limitations at this stage as far as the size of the source code and knowing about up-to-date APIs. But the core capability to create software from a prompt is well documented. It's a matter of degree of capability at this point, not 16 versions away.
I built multiple versions of that already. Previous version: https://aidev.codes/ - limited in scope and somewhat unreliable if you are hoping to one-shot something without iterating, but still able to produce basic live web applications from a few prompts a fair amount of the time.
But I didn't have a marketing budget and people didn't want to pay for the VMs. So I am working on a version 3 that has more focused scope in terms of the types of web applications and also has templates built in so they look better by default.
But this type of thing is underway, it's not some science fiction concept at this point.
It could also be the sheer concentration of CO2. I work in a forest: 400ppm in the office every morning. Back at home, in the city, 5km away: 580ppm minimum all day long. Maybe cities make people stupid :D
There's a lot of different things that happen to the body at altitude.
For example: the kidneys take over shunting out some wastes that your lungs can't breathe out due to the thinning atmosphere. This means you urinate more, which drags electrolytes (and likely other things) out with it.
I couldn't begin to guess what might be going on with you based on what little you have said.
Basically you call `.raw("...")` on some model's manager, but there's no requirement you actually query that model's table at all.
    from django.db import models

    class SomeModel(models.Model):
        name = models.CharField(max_length=100)

    class OtherModel(models.Model):
        foobar = models.CharField(max_length=100)

    # raw() requires the primary key in the select; everything else can be aliased freely
    SomeModel.objects.raw("select id, foobar as name from thisapp_othermodel")
will yield `SomeModel`s with `name`s that are actual `OtherModel` `foobar` values.
> ChatGPT to code in array programming or logic languages results in code which is highly non-idiomatic for those paradigms. Why is that?
Reason #1 is that those languages are unreadable line noise to humans too. Fundamentally, almost all of the code written in array languages is made purposefully obtuse: single-letter identifiers, little or no commenting, dense code with minimal structure, etc...
Reason #2 is that there are very few examples of these languages on the web, and even more importantly: vanishingly few examples with inline comments and/or explanations. This isn't just because they're rare -- see reason #1 above.
Reason #3 is that LLMs can only write left-to-right. They can't edit or backtrack. Array-based languages are designed to be iterated on, rapidly modified, and even "code golfed" to a high degree.[1]
I've noticed that LLMs struggle with things my coworkers also struggle with: the "line noise" languages like grep, sed, and awk. Like humans, LLMs do well with verbose languages like SQL.
PS: I just tested GPT 4 to see if it can parse a short piece of K code that came up in a thread[2] on HN and it failed pretty miserably. It came close, but on each run it came up with different explanations of what the code does, and none of them matched the explanations in that thread. Conversely, it had no problems with the Rust code. And, err... it found a bug in one of my Rust snippets. Outsmarted by an AI!
[1] You can have an LLM generate code, and then ask it to make it shorter and more idiomatic. Just like a human touching up hastily written messy code, the LLM can fix its own mistakes!
Coding with LLMs got easier for me once I understood their limitations.
1. They don't know what they are saying until they have said it.
2. Your inputs and its previous outputs together form the context for the next message.
3. LLMs are not suited for information retrieval the way databases and search engines are.
LLMs excel at reasoning and predicting subsequent text based on given context. Their strength lies in their ability to generate relevant and cohesive responses.
To optimize results, outline clear rules, strategies, or ideas for the LLM to follow. This helps the model craft, revise, or build upon the established context.
Starting with a precise query and introducing rules or constraints incrementally can help steer the model's output in the desired direction.
Avoid zero-shot queries as these can lead to the model generating unexpected or unrelated responses.
Be cautious while seeking pre-calculated or non-derived answers. Some instruction-tuned models might output incorrect solutions, as they are trained to respond to certain queries without proper context or information.
Also, this is my biggest gripe (no fault of ours, of course): don't seek pre-calculated or non-derived answers. I've seen some of the demonstration data people are using to train instruction-tuned models, and the models are being taught to respond by making up answers to problems they shouldn't try to compute. For example (and note the output here is wrong):
{
"instruction": "What would be the output of the following JavaScript snippet?",
"input": "let area = 6 * 5;\nlet radius = area / 3.14;",
"output": "The output of the JavaScript snippet is the radius, which is 1.91."
},
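A quick check of the arithmetic (same numbers, in a Python shell) shows why that training example is bad:

    >>> area = 6 * 5
    >>> round(area / 3.14, 2)
    9.55

So the radius ends up around 9.55, not 1.91 - and strictly speaking the snippet prints nothing at all, since both lines are bare assignments.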
But their app is not user-repairable, it is closed source :)
Okay, I don't want to be seen as a FairPhone hater. Actually I'm quite impressed and quite happy that they turned out to be more than an over-hyped vaporware prototype, and they actually deliver.