I personally have thousands of Twitter accounts I control just for fun; I mostly use them for pranks, though in aggregate there are a few million followers between them (not sure how many duplicates there are, or, ironically, how many of their followers are bots). You can even buy software to create and manage Twitter accounts pretty easily in blackhat forums.
For mine, I scraped a bunch of Instagram pictures for photos and auto-generated bios from a few basic parameters, e.g. "Beer lover, proud parent." Names were easy: mix and match the most popular first and last names from census records. Grab a few lat/longs and convert them to the biggest US cities and you have a location; find some data source to tweet from (breaking news is easiest) and you have a fully automated, human-like Twitter account. For bonus points, pick a random color toward the low end of the hexadecimal range (rand(a..c)++rand(0..f)++rand(a..c)++rand(0..f)++rand(a..c)++rand(0..f) works fine) and your page even looks personalized down to the color.
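A minimal sketch of that color trick in Python, assuming the intent is six hex digits where the high nibble of each RGB channel is drawn from a-c and the low nibble from the full 0-f range (the function name is mine):

```python
import random

def profile_color():
    """Build a hex color per the scheme above: for each of the three RGB
    channels, the high nibble comes from a-c and the low nibble from the
    full 0-f range, so every channel lands between 0xa0 and 0xcf."""
    color = "#"
    for _ in range(3):  # one pass per RGB channel
        color += random.choice("abc")               # high nibble: a-c
        color += random.choice("0123456789abcdef")  # low nibble: 0-f
    return color
```

Seed `random` first if you want the same "personalized" colors to be reproducible across runs.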
Start following random people and 10% follow back (even more if you follow people who are tweeting about similar keywords as you - kindred spirits I guess).
The only tricky part is making sure you don't cross lines with your IPs. You could buy/rent them privately, but you really only want to keep a few accounts (3-5) to each IP, so that gets expensive ($.75/IP/month) when you don't have a really good reason to use your accounts. You can scrape free listings for them, but those are nasty, slow, and can cause bans if Twitter decides to take down a whole range or if you are forced to switch IPs too quickly.
Device type, browser, etc. is easy to spoof.
Should you decide to, it's also really easy to change name, username, and profile picture of an account in the future. So if I wanted a few thousand Trump-supporting (or Trump-hating) sock puppets I could have them today.
If you don't want to buy/create/manage Twitter accounts yourself you can get access to what's called a "panel." A panel is basically an automated, coin-operated network of fake accounts that you can control at wholesale prices. Want 5,000 followers? Plug $1/1,000 followers into the panel, supply the username, and you'll have them in a couple of minutes. Or resell 5,000 followers for $25 and pocket the $20 difference. For an example of a panel, see this ad on blackhatworld: https://www.blackhatworld.com/seo/the-biggest-smm-panel-yout.... Nothing special about this one, just the first I found when I googled. They're a dime a dozen.
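The reseller margin in that example is simple arithmetic; here it is as a toy calculation, with the prices above as defaults (the function name is mine):

```python
def reseller_profit(followers, wholesale_per_1k=1.00, resale_price=25.00):
    """Profit from reselling panel followers: buy at the panel's wholesale
    rate ($1 per 1,000 followers in the example above), sell at retail."""
    cost = wholesale_per_1k * (followers / 1000)
    return resale_price - cost
```

For 5,000 followers that's $5 of wholesale cost against a $25 sale, the $20 difference mentioned above.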
I'm certain there are millions of fake accounts for every service imaginable.
For those not aware of the background, the author is a wizard from a secretive underground society of wizards known as the Familia Toledo; he and his family (it is a family) have been designing and building their own computers (and ancillary equipment like reflow ovens) and writing their own operating systems and web browsers for some 40 years now. Unfortunately, they live on the outskirts of Mexico City, not Sunnyvale or Boston, so the public accounts of their achievements have been mostly written by vulgar journalists without even rudimentary knowledge of programming or electronics.
And they have maintained their achievements mostly private, perhaps because whenever they've talked about their details publicly, the commentary has mostly been of the form "This isn't possible" and "This is obviously a fraud" from the sorts of ignorant people who make a living installing virus scanners and pirate copies of Windows and thus imagine themselves to be computer experts. (All of this happened entirely in Spanish, except I think for a small amount which happened in Zapotec, which I don't speak; the family counts the authorship of a Zapotec dictionary among their public achievements.) In particular, they've never published the source or even binary code of their operating systems and web browsers, as far as I know.
This changed a few years back when Óscar Toledo G., the son of the founder (Óscar Toledo E.), won the IOCCC with his Nanochess program: https://en.wikipedia.org/wiki/International_Obfuscated_C_Cod... and four more times as well. His obvious achievements put to rest — at least for me — the uncertainty about whether they were underground genius hackers or merely running some kind of con job. Clearly Óscar Toledo G. is a hacker of the first rank, and we can take his word about the abilities of the rest of his family, even if they do not want to publish their code for public criticism.
I look forward to grokking BootOS in fullness and learning the brilliant tricks contained within! Getting a full CLI and minimalist filesystem into a 512-byte floppy-disk boot sector is no small achievement.
It's unfortunate that, unlike the IOCCC entries, BootOS is not open source.
> The Rust or Haskell compilers, for example, insist on policing whatever private understanding you might have about the meaning of your code.
The thing about "private understandings" is that they're just another way of saying "we expect ceaseless, flawless vigilance" (as one writer put it), not to mention "flawless communication and training."
Languages which impose weird restrictions tend to do so because it allows them to offer (nearly) ironclad guarantees about something else.
There are certain programming idioms that work brilliantly in Haskell. Those same idioms would be utterly miserable in the presence of unrestricted mutation.
Or to take the author's other example, Rust forbids shared mutable state, and it keeps careful track of ownership. But I can freely use native CPU threads without worrying about anything worse than a deadlock, and I can bang directly on raw bytes without worrying about anything worse than a controlled runtime failure. And this remains true even if team communication occasionally fails or if someone makes a mistake.
Sometimes I want to rule out entire classes of potentially dangerous mistakes, and not just have a "private understanding" that nobody will ever make certain mistakes.
As always, it's a matter of using the right tool for the job. If you need to write a high-performance, heavily-threaded network server that parses malicious binary data, Rust is a great tool because of those restrictions. If you need to do highly exploratory programming and invent new idioms to talk about your problem domain, Common Lisp is awesome. And if you need to build libraries where everything has a rigid mathematical structure, Haskell is a great tool.
In my experience, Common Lisp is a deeply opinionated language, and its most dramatic opinion is that "your code and your data should have identical representations and structures." And for the right problem, that restriction is extremely powerful.
There's a YC company that tries to make starting and scaling WebRTC super easy, which is far from trivial for a variety of clients/browsers or with 5+ participants simultaneously: https://www.daily.co
My biggest shock was how much "PR" was generated on Reddit, and how many sexworkers really do use the platform.
I knew it was a thing, and I knew of the memes, but it was something else to see both sides up in arms over the company vs. its branding - creating their own website and content, and a vanity domain as well.
People really do just want a one click solution for creating adult content, and consuming adult content.
And the memes - I think they're pretty toxic: 4chan, incel, Reddit, Twitter memes. I never knew there was that much angst.
If you are building a database engine that strongly prioritizes performance, and Scylla does position itself that way, then C++ is the only practical choice today for many people, depending on the details. It isn't that C++ is great, though modern versions are pretty nice, but that it wins by default.
Garbage-collected languages like Go are incompatible with high-performance database kernels because the GC interferes with their core design elements. In addition to a significant loss of performance, it introduces operational edge cases you don't have to deal with in non-GC languages.
Rust has an issue unique to Rust in the specific case of high-performance database kernels. The internals of high-performance databases are full of structures, behaviors, and safety semantics that Rust's safety-checking infrastructure is not designed to reason about. Consequently, using Rust in a way that produces equivalent performance requires marking most of that code as `unsafe`. And while you could do this, Rust is currently less expressive than modern C++ for this type of code anyway, so it isn't ergonomic either.
C++ is just exceptionally ergonomic for writing high-performance database kernels compared to the alternatives at the moment.
The Steve Jobs quote on why Xerox failed [1] strikes again. The finance people have taken over Acti-Blizzard, and the company has been coasting for 10+ years on its original franchises. All we get is annual CoD releases and Blizzard milking its old properties; Blizzard hasn't had a significant original release in 10+ years.
This effort seems like it's part of the ruthless approach to controlling costs that slowly strangles a company from within.
I believe WoW is the #2 property (after CoD) at Acti-Blizzard and it's clearly changed from one of delivering a game to simply extracting as much money as possible from each customer much like how almost all mobile games do.
The state of California's complaint is bad. I mean really bad. The fact that 3-4 different people from AB all released different statements in the last week should tell you exactly how bad it is. That's classic panic mode. There should only be one.
This latest move tells you the company believes it will blow over and they're looking to do the minimal required to appease the detractors and get back to business as usual without having to pay people more or pay out a bunch of lawsuits.
Honestly, the heads of J Allen Brack and Bobby Kotick in particular should roll over this lawsuit.
EDIT: commenters have noted (correctly) that I overlooked Overwatch. This was a significant release, but it seems to have waned in popularity, and Overwatch 2 is inexplicably going in some weird PVE direction.
Other than that you have a poorly received Diablo sequel, a series of lackluster-to-bad WoW expansions, a disastrous Warcraft 3 remaster, and the complete abandonment of the RTS genre that propelled them to success in the first place.
WC3 Reforged ("Refunded") was significant in that it was not only underwhelming and plagued with problems, it also made the original game worse with a forced download and loss of functionality.
The most significant change, however, was Blizzard not wanting a repeat of missing the MOBA boat with Dota 2: they added a condition that all the IP for third-party maps belongs to Blizzard, completely killing that ecosystem.
We are building ContainIQ (https://www.containiq.com/)! We provide Kubernetes-native monitoring instantly, with pre-built dashboards and easy-to-create monitors. A one-line install takes 5 minutes to set up, and it just works. By using eBPF we're able to correlate kernel-level metrics with Kubernetes objects. Our current users use the product to track and get alerted on things like p95/p99 latencies, failing Kubernetes jobs, and pod evictions, among other things.
Shimmer (https://shimmer.care) offers guided video support groups for people struggling with their mental health. Shimmer matches members with shared identities and experiences and places them in small groups that meet for weekly support sessions along with a qualified peer coach. During the rest of the week, members have access to our Community Platform (a mobile app and web app) to leverage resources like community events, mood tracking, and gratitude journaling.
79% of young adults with mental health issues do not have access to care; the most common alternative, teletherapy, is expensive ($150/session), has significant churn (40% drop off after first visit), and lacks diverse representation (average age of 51 and 80% white). We've developed a curriculum incorporating material from expert group therapists at UCSF. This has led to a number of great outcomes including: providing care at a fraction of the cost of therapy ($50/month or $12/session), an 80% 4-month retention rate, and a diverse set of highly experienced facilitators that members can relate better to. If you're interested in trying Shimmer, you can sign up for a consultation call or a wellness workshop (both for free) directly on our website. Shimmer is led by three founders with extensive experience across healthcare and engineering. Having seen firsthand the severe effect that mental health issues have had on loved ones, we left previous roles (graduate programs at Berkeley MBA/MPH, UCSF MD and Salesforce SWE) to dedicate ourselves to improving the accessibility and affordability of mental health care.
An intensive Vipassana retreat is just that, an INTENSIVE retreat. I've meditated for ~10 years. I don't think anyone's FIRST meditation experience should be a 10-day Vipassana retreat. I don't think anyone's first silence experience should be a 10-day Vipassana retreat. That's like going to a Navy Seal bootcamp when you've never exercised in your life.
A lot of Buddhist or meditative practices also have close student-teacher relationships. You should have an active human coach guiding you through the process, because shit comes up, people have different mental health setups (just like a physical therapist has different workout plans for people with injuries and physical conditions) and if it's just DIY experimenting, you can encounter something and get screwed.
A good example of this is the Tim Ferriss podcast episode where he did a 10-day Vipassana retreat, decided to fast for several days to amplify the effect, and almost had a complete mental breakdown. It worked out and led to a very vulnerable podcast down the line (https://tim.blog/2020/09/14/how-to-heal-trauma/), but these are things to keep in mind when going into deep waters.
I was responsible for Stripe's API abstractions, including webhooks and /events, for a number of years. Some interesting tidbits:
Many large customers eventually had some issue with webhooks that required intervention. Stripe retries webhooks that fail for up to 3 days: I remember $large_customer coming back from a 3 day weekend and discovering that they had pushed bad code and failed to process some webhooks. We'd often get requests to retry all failed webhooks in a time period. The best customers would have infrastructure to do this themselves off of /v1/events, though this was unfortunately rare.
The biggest challenges with webhooks:
- Delivery: a single customer timing out connections at 30s could cause the queues to get backed up (Stripe was much smaller back then).
- Versioning: synchronous API requests can use a version specified in the request, but webhooks, by virtue of rendering the object and showing its changed values (there was a `previous_attributes` hash), have to be rendered at a specific version. This made upgrading API versions hard for customers.
There was constant discussion about building some non-webhook pathway for events, but they all have challenges and webhooks + /v1/events were both simple enough for smaller customers and workable for larger customers.
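The reconciliation pattern described above - the best customers replaying missed webhooks off /v1/events - can be sketched generically. This is not Stripe's actual code; assume each event is a dict with an `id` and a `created` timestamp, and that you track which event ids your handler has already processed:

```python
def events_to_replay(events, processed_ids):
    """Given events fetched from an events feed (e.g. /v1/events) and the
    set of event ids already processed, return the missed events in the
    order they occurred, ready to be re-run through the webhook handler."""
    missed = [e for e in events if e["id"] not in processed_ids]
    return sorted(missed, key=lambda e: e["created"])
```

Run this periodically over a window longer than your worst expected outage (e.g. a long weekend of bad code), and the events feed doubles as a safety net whenever webhook delivery or your own handler fails.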
Tax patents. Progressively. Start from, say, $1,000 USD per year and double the fee each year until the patent holder decides not to pay and the patent passes into the public domain. Feel free to add a couple of free years at the beginning or tweak other parameters to suit the needs of different industries.
I wonder if a system where keeping a patent active requires paying an exponentially increasing annual fee would help. So anyone could file a patent for say $1,000 in the first year, but then it would cost $2,000, $4,000, $8,000, etc. to keep it active in subsequent years.
That way, it would be financially infeasible for even the largest companies to hoard patents for years, but an individual inventor could start small and pay the fees as their business grows.
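To see why doubling makes long-term hoarding infeasible, the fee schedule can be worked out directly (numbers follow the illustrative $1,000-doubling example above):

```python
def patent_fee(year, base=1_000):
    """Maintenance fee for a given year: $1,000 in year 1, doubling annually."""
    return base * 2 ** (year - 1)

def cumulative_fees(years, base=1_000):
    """Total paid to keep the patent alive through the given year."""
    return sum(patent_fee(y, base) for y in range(1, years + 1))
```

An individual inventor pays only $7,000 total over the first three years, but holding a patent for a full 20-year term would cost over $1 billion cumulatively, which is exactly the pressure on hoarders the scheme intends.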
Part of the problem has been that, the way the system is currently set up, after an inventor invents something genuinely novel but with a narrow focus, their company's patent lawyers massage the claims language to make the patent much broader, sometimes effectively claiming any possible way to solve the problem instead of just claiming the specific invention. These ridiculously overbroad patents are the ones that are most valuable to trolls, and it is only the claims that matter.
I don’t hear much about it these days, but circa 2004 when I was going into computer science I had multiple teachers and a guidance counselor warn me that all the jobs would be going to India and tell me I should pursue something else. This seemed to be very much the prevailing wisdom of the time. I pursued it anyway as it was my passion.
I was one of five students in my program, which had been a huge multi-campus program only a couple of years earlier. We were the last cohort before the college discontinued the program entirely, and I was the only one to graduate. What I found, however, for maybe five years after graduation was an insanely high demand for developers.
There was genuinely a generation that was so strongly discouraged from becoming developers that there were very few. Seems to me like the folklorists have largely missed this.
Humans have a larger neocortex compared to other mammals, which gives them more storage, working memory, and computational potential than other animals.
The curses of the human condition are anxiety and regret, the conditions that render people unable to engage in the present moment. The former is characterised as being stuck in the future psychologically, and the latter as being stuck in the past. Since neither the future nor the past can be changed by obsessive thoughts in the present, these psychological conditions are a recipe for unhappiness, because happiness can only ever happen in the present moment, which is lost.
What characterises flow state and mindfulness is total absorption in the present moment.
What if animals that are not burdened by human quantities of neocortex, such as insects and canines, which exist in the present moment, are actually experiencing flow and mindfulness?
The spider weaves its web in perfect concentration, not stopping to contemplate, plan, or exercise executive judgement. Might not that be akin to flow?
The dog and the cat and the cow and the horse, given physical comfort, do not seem to worry about the future or regret the past. Might they not be blessed with a form of mindfulness that humans in the rat race rarely manage to experience?
First, I deeply appreciate that so many on Hacker News have come out for this. Enough to awaken me from a sound sleep on a Tuesday evening!
I don't really care that much about selling Klein bottles over Amazon - it's mainly to reach parents over the holidays. But I do wish that Amazon would do something about this kind of thing.
Finally, I'm very low on stocks of glass Klein bottles. It's weird for me to ask my friends not to buy the things I've worked so hard to make, but I guess I'd better. I hope to have more manifolds in mid-to-late summer.
Warm wishes all around,
-Cliff (way late on a cloudy Tuesday evening in Oakland)
You have an asset (doesn't matter what it is). It's going up because the fundamentals improved, or because favorable press, or whatever. That's not a bubble. Stuff goes up all the time.
So people start buying it, because it's going up, and they want in on the action. That's still not a bubble. People buy stuff that's going up all the time.
Now there's new money flowing into that asset. So now the price goes up because of the new money flowing in, and the new money was flowing in because the price was going up. Now it's a bubble. But it's not dangerous yet. People can lose their shirts, but it won't hurt the overall economy.
It becomes a dangerous bubble when (lots of) people invest in the asset with borrowed money. Now if it crashes, it can take banks with it. If that happens at a large enough scale, you damage the economy as a whole.
To me this editor gets it right in the sense that we don’t need to get rid of code as the “nocode” movement is trying. Instead we need to make coding more enjoyable and figure out ways to make it more interactive.
One specific thing Utopia addresses, to me, is the need for the code and the actual thing it produces to be treated as one single interactable component and not two separate things.
Instead we're treating the thing as a one-way compile step. There's no way to sync the DevTools in-memory changes we make to the DOM with the actual code.
The fact that Utopia allows the two things to be treated as one is a huge step towards making webdev more enjoyable.
And they’re following good steps… SwiftUI’s editor is very similar in this regard. Using the code as the main thing but having all kinds of interactable components around it that make writing code simpler with cool visual autocomplete widgets & visual aids.
Before, with direct DOM changes, building something like this was impossible, but now with the React paradigm it seems natural to have this sync between code and visuals.
When you've travelled 30 times in a row with near-perfect QoS, your expectations get recalibrated. It's an ironic effect of success that your customers become less tolerant of failure.
It's the Louis C.K. bit about being (roughly) 'in a chair in the sky at 500mph: it's a miracle yet no one's happy'.
Disclaimer: I'm a competitor of Okay along with others in the software development metrics space.
I just want to comment on
> We also learned that the discussion about engineering metrics always falls into a false dichotomy: don’t measure anything because engineering is creative work (it is!) or measure engineers in intrusive ways along meaningless dimensions like lines of code.
I think with close to 50 years of doing things wrong with software development metrics, we've left a very bitter taste in the mouths of developers, and it is fully understandable that developers would be wary and skeptical of software development metrics. It is certainly one-sided, and I do agree this false dichotomy needs to be addressed.
When it is all over, if software development metrics are done right (with the emphasis on done right), developers should be the ones advocating for them, since it means:
- They can work more efficiently since software metrics can help them better understand how a piece of code came to be
- They can better sell themselves for promotions and raises. For example, they can use metrics to highlight impact and what it would mean if they left. Their manager may know they are a top contributor, but if their manager can't sell them, it won't help. With software metrics, managers should be able to highlight how their developer is having an impact when the raise/promotion pool is divided up.
- And so forth
I honestly think the best way to get everybody onboard with metrics, is to clearly show that it takes effort to generate meaningful insights. And this is why I'm not so much focused on providing canned reports, but rather, I want to provide business intelligence for the software development lifecycle.
The goal (which it sounds like Okay is working towards as well) is to connect all the dots in the software development lifecycle and provide users with the necessary data to make informed decisions. In the business world, we have "business intelligence specialist" because nobody takes for granted how difficult it is to get business insights. And it is truly baffling how we don't have "software development specialist" to help us interpret efficiency and productivity as context matters and not everybody is qualified to interpret development metrics.
If only so many people didn't treat land and houses like investments, there wouldn't be such a disastrous hoarding problem. Henry George's Land Value Tax [1][2] seems like an obvious economic fix that would require a large fraction of the population to change how they think about land and natural resources, but most of them are invested in such a way (i.e. owning multiple houses on multiple parcels of land, often without renting anything out) that it is in their economic self-interest to block any such proposals. Although, I suppose this situation isn't unique. The world has long been doomed by coordination problems.
Here's why investors buy up houses: they know your neighbors will do the dirty work of artificially constraining the housing supply, which makes it a good investment.
Here's one who comes right out and explains this:
> Meanwhile, local opposition to building is so commonplace and the approval process so cumbersome, time consuming, and expensive, even when a proposed project complies entirely with requirements, approvals are not forthcoming, at least in an expeditious manner and needed supply is simply not provided. Recently I heard of a new acronym to add to my vocabulary: CAVE, Citizens Against Virtually Everything, to be added to NIMBYISM and BANANA (Build Absolutely Nothing Anywhere Near Anyone).
Support groups like https://yimbyaction.org/ if you want to 'stick it to the investors'. If there's a credible threat to build plenty of housing, they'll move on.
> Wade devoted a full section to the “furin cleavage site,” a distinctive segment of SARS-CoV-2’s genetic code that makes the virus more infectious by allowing it to efficiently enter human cells.
> Within the scientific community, one thing leapt off the page. Wade quoted one of the world’s most famous microbiologists, Dr. David Baltimore, saying that he believed the furin cleavage site “was the smoking gun for the origin of the virus.” Baltimore, a Nobel Laureate and pioneer in molecular biology, was about as far from Steve Bannon and the conspiracy theorists as it was possible to get. His judgment, that the furin cleavage site raised the prospect of gene manipulation, had to be taken seriously.
Furin cleavage sites have evolved and are present in multiple coronaviruses:
- HCoV-OC43 (infects humans)
- HCoV-HKU1 (infects humans)
- MHV-A59
- ChRCoV-HKU24
- BtCoV-ENT
- BtNeCoV-PML-PHE1
- BtCoV-HKU4
- BtCoV-HKU5
- MERS-CoV
- BtHpCoV-Zhejiang2013
- SARS-CoV-2
Phylogenetic analysis suggests that it has evolved independently at least 6 times that we know of.
After that article was published, a team in Thailand found furin cleavage sites in sarbecoviruses closely related to SARS-CoV-2, called RacCS203 (91.5% similarity to SARS-CoV-2) and RmYN02 (93.3% similarity to SARS-CoV-2).
There is a long list of folks who I'll never meet, but for whom I'd love to buy a beer, and Meyer is at the top.
After the dot-com crash I found myself coding again, and fell into CSS with his book, and managed to carve out a spot for myself as a bit of an early CSS guru in my professional network. Doing that saved my house, quite literally.
CSS can be gnarly and frustrating. This was even MORE true back then when you basically had to create parallel implementations of the same layout to support IE's willfully wrong interpretations of the box model (oh, and its bugs). But it beat the everliving crap out of what came before.
I've long since left that part of my career -- I mostly talk and tell people what to do now -- but I'll always love CSS a little for that period of my life.
This, my friends, is how business travel becomes nearly irrelevant.
This is a beautifully executed idea and if the demos live up to expectation the hype may even be warranted. But on a much more fundamental level (i.e. fancy 3D imaging and spatial audio aside), this also possibly suggests people would benefit from dedicated videoconferencing hardware. TVs and telephones do one thing really well (or at least historically they did), which is why even my legally blind grandpa could call his friends or watch^W listen to the news. There's a market for having a plug-and-play videophone now that we have the software to go inside it.