
The number of public WiFi networks (including in-flight ones) I've bypassed by running a VPN server on UDP port 53 is honestly insane. Sadly, this is becoming less common - many captive portals don't allow any egress at all aside from the captive portal's IP - but alas, it's still impressive how many are susceptible. It also bypasses traffic shaping (speed limiting) on most publicly accessible networks, even those that require some kind of authorization before allowing external access.

Highly recommend SoftEther, as they give you juicy Azure relay capability for free, which is allowed on more "whitelist only" networks than your own VPS server.

Haven't gone so far as to enable iodine for actual two-way DNS communication through a third-party DNS resolver, but that would probably work in more cases than this, albeit slower.
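For what it's worth, the port-53 trick needs nothing exotic - here's a sketch of a WireGuard server config listening on UDP 53 (addresses, keys, and the interface name are placeholders, not from the original comment):

```ini
# /etc/wireguard/wg0.conf on the VPN server (sketch)
[Interface]
Address = 10.8.0.1/24
PrivateKey = <server-private-key>
# The whole trick: listen where captive portals expect DNS traffic
ListenPort = 53

[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
```

The client then sets its `Endpoint` to `<server-ip>:53`; networks that only leak UDP 53 will happily pass the tunnel.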


Worth reading Deep Packet Inspection is Dead: https://security.ias.edu/deep-packet-inspection-dead-and-her...

This tool is great, but I religiously route all my traffic through a VPN that I own and control. I've hardened the box I use to keep zero logs, and I don't need to blindly trust a commercial provider, whether they've been audited or not. There's no way of really knowing they're not logging in some capacity, short of being physically in their server room and inspecting their setup.

Add to the VPN a DoH resolver that I own and control too, and it makes things even better. I also block port 80 on my machine as an extra measure. There's no need to be using port 80 in this day and age, except for captive portals, which I rarely ever have to use.
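Blocking outbound port 80 is a one-rule job - a minimal nftables sketch (table and chain names are arbitrary, not from the comment):

```
# reject outbound plain HTTP; everything else passes
table inet filter {
  chain output {
    type filter hook output priority 0; policy accept;
    tcp dport 80 reject
  }
}
```

Temporarily flushing or deleting the rule is enough for the rare captive-portal login.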


> If it is, as you claim, permissible to train the model (and allow users to generate code based on that model) on any code whatsoever and not be bound by any licensing terms, why did you choose to only train Copilot's model on FOSS? For example, why are your Microsoft Windows and Office codebases not in your training set?

This is my favorite question about Copilot ever.


I highly recommend reading the Linux kernel management style guide at https://www.kernel.org/doc/html/latest/process/management-st....

It has this gem of a quote:

> It helps to realize that the key difference between a big decision and a small one is whether you can fix your decision afterwards. Any decision can be made small by just always making sure that if you were wrong (and you will be wrong), you can always undo the damage later by backtracking. Suddenly, you get to be doubly managerial for making two inconsequential decisions - the wrong one and the right one.

It goes on to mention that most technical decisions fall in line as small changes, and why.


I was trying to register this too in 1994. I guess it was a race between several of us to register stupid domains. I was also bending their ears to get them to let me register the remaining single-character dotcoms. It is heartwarming to see the Internic form on that page - you used to type up the text file and email it to them and wait. You could have pretty much any domain you wanted in 1994. Except fuck.com. Also, they were FREE. I lost all mine once they introduced the $200/year registration fee, after, I believe, Unilever sent them 19,000 registration requests for each of their trademarks.

Despite domains being free, most of the web sites I would visit were simply hosted on IPs. I had a big notebook next to my PC with all the IPs written down. That was my DNS in 1994.


Fun fact, and one of my dinner-party anecdotes: I have the accepted answer for one of Ross Ulbricht's (Silk Road's Dread Pirate Roberts) SO questions that got him busted.

https://stackoverflow.com/questions/9563675/destroying-a-spe...


The rubber band trick [1] really helped me. Took my unlocks-per-day from ~100 to ~20. I took it off after about 2 weeks and it's more or less stayed that way.

1: https://www.independent.co.uk/life-style/gadgets-and-tech/mo...


In the mid-nineties I worked in a research institute. There was a large shared Novell drive which was always on the verge of being full. Almost every day we were asked to clean up our files as much as possible. There were no disk quotas for some reason.

One day I was working with my colleague and when the fileserver was full he went to a project folder and removed a file called balloon.txt which immediately freed up a few percent of disk space.

Turned out that we had a number of people who, as soon as the disk had some free space, created large files in order to reserve that free space for themselves. About half the capacity of the fileserver was taken up by balloon.txt files.


> This journey began some 27 years ago. Amazon was only an idea, and it had no name.

It had a name, and that name was "Cadabra".

It didn't become Amazon until Jeff watched a documentary about the Amazon River. His lawyer had already turned up his nose at "Cadabra", and Jeff was looking for something else.

It's also worth noting that the idea didn't grow over time - Jeff always intended to build something like "Sears for the 21st century". The bookstore was just the way in, not the long term plan.

ps. amazon employee #2


That system was the backend of the functionality currently known as Facebook Messenger, not WhatsApp.

One might note, snidely [1], that three-four years after freezing development of an Erlang-based messaging system, starving maintenance work on that system of engineering effort, devoting massive engineering resources to a from-scratch C++ rewrite, and in the meantime blaming the language for relatively minor system design issues that could have been improved with a fraction of the effort (including, but not limited to, Erlang's ability to wrap allegedly critical C++ components) … Facebook plowed $19B into the acquisition of an Erlang-based messaging system.

[1] as a main author of the Facebook Chat version written in Erlang


This is actually funny, because I was involved with the creation of this list, way back in 2004. The whole thing started as a way to stop phishing.

I was working at eBay/PayPal at the time, and we were finding a bunch of new phishing sites every day. We would keep a list and try to track down the owners of the (almost always hacked) sites and ask them to take it down. But sometimes it would take weeks or months for the site to get removed, so we looked for a better solution. We got together with the other big companies that were being phished (mostly banks) and formed a working group.

One of the things we did was approach the browser vendors and ask: if we provided them a blacklist of phishing sites (which we already had), would they block those sites at the browser level?

For years, they said no, because they were worried about the liability of accidentally blocking something that wasn't a phishing site. So we all agreed to promise that no site would ever be put on the list without human verification and the lawyers did some lawyer magic to shift liability to the company that put a site on the list.

And thus, the built in blacklist was born. And it worked well for a while. We would find a site, put it on the list, and then all the browsers would block it.

But since then it seems that they have forgotten their fear of liability, as well as their promise that all sites on the list would be reviewed by a human. Now that the feature exists, they have found other uses for it.

And that is your slippery slope lesson for today! :)


This reminds me of a Burning Code celebration my team once had at ocean beach in SF.

We'd been slowly migrating from Angular 1.X to React (internally: the Angularpocalypse) for a few years and we'd finally migrated over our last few pages. The result was about 100k lines of JS and Rails code that could be safely deleted in a single PR. It had been such a long slog, though, that we felt the team deserved some catharsis.

We took a team-offsite day to gather on a nearby beach and burn the deleted code. In the interest of not wasting that much paper, we burnt a complete list of the deleted files in super-tiny font on a couple pages. We also each grabbed our least-favorite areas of the codebase to print out, including several dramatic readings. My selection was a section of code from about 4 years prior with a comment like //TODO: replace this asap.

I highly recommend it to anyone facing a long, clearly-delineated migration. Gift your old, shameful code to the flame.


One reason for the explosive interest in service mesh over the last 24 months that this article glosses over is that it's deeply threatening to a range of existing industries that are now responding.

Most immediately to API gateways (e.g. Apigee, Kong, Mulesoft), which offer similar value to SM (centralized control and auditing of an organization's East-West service traffic) but implemented differently. This is why Kong, Apigee, nginx, etc. are all shipping service mesh implementations now, before their market gets snatched away from them.

Secondly to cloud providers, who hate seeing their customers deploy vendor-agnostic middleware rather than use their proprietary APIs. None of them want to get "Kubernetted" again. Hence Amazon's investment in the very Istio-like "AppMesh", and Microsoft's (who already had "Service Fabric") attempt to do an end run around Istio with the "Service Mesh Interface" spec. Both are part of a strategy to ensure that if you are running a service mesh, the cloud provider doesn't cede control.

Then there's a slew of monitoring vendors who aren't sure if SM is a threat (by providing a bunch of metrics "for free" out of the box) or an opportunity to expand the footprint of their own tools by hooking into SM rather than require folks to deploy their agents everywhere.

Finally there's the multi-billion dollar Software Defined Networking market, which is seeing a lot of its long-term growth and value threatened by these open source projects that are solving at Layer 7 (and with much more application context) what it had been solving at Layers 3-4. VMware NSX already has an SM implementation (NSX-SM) built on Istio, and while I have no idea what Nutanix et al. are doing, I wouldn't be surprised if they launched something soon.

It will be interesting to see where it all nets out. If Google pulls off the same trick that they did with Kubernetes and creates a genuinely independent project with clean integration points for a wide range of vendors then it could become the open-source Switzerland we need. On the other hand it could just as easily become a vendor-driven tire fire. In a year or so we'll know.


Not really sure what your central question is, then. The general theme for building a recommender system architecture is:

1. Decide whether you are okay with a batch approach or an online learning approach or a hybrid.

2. Start simple with a batch approach (similar to what you are doing):

a) Get features ready from your dataset (assuming you have interaction data): pre-processing via some big-data framework (MapReduce, Dataflow, etc.)

b) Build a vector space and nearest-neighbor data structures.

c) Stick both into a database optimized for reads

d) Stick a service in front of it and serve.

Once you are happy with 2, you can try out variations involving online updates to your recommender system, which may change the type of database you want to optimize for, etc.
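Steps (a)-(d) can be sketched end to end in a few lines. This is a toy illustration, not a production recipe: the interaction matrix is made up, raw interaction columns stand in for learned embeddings, and brute-force cosine similarity stands in for a real nearest-neighbor index.

```python
import numpy as np

# Toy interaction matrix: rows = users, columns = items; 1 means the
# user interacted with the item. (Hypothetical data; step (a) would
# derive this from real logs.)
interactions = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 1, 1, 0],
], dtype=float)

# Step (b): build the item vector space. Here each item's vector is
# just its column of user interactions; real systems would use matrix
# factorization or learned embeddings instead.
item_vectors = interactions.T

def nearest_items(item_id, k=2):
    """Brute-force cosine nearest neighbors for one item."""
    v = item_vectors[item_id]
    norms = np.linalg.norm(item_vectors, axis=1) * np.linalg.norm(v)
    sims = item_vectors @ v / np.where(norms == 0, 1.0, norms)
    sims[item_id] = -np.inf  # exclude the item itself
    return [int(i) for i in np.argsort(-sims)[:k]]

# Steps (c)/(d) would precompute nearest_items for every item, store
# the lists in a read-optimized database, and serve them via a service.
print(nearest_items(0))  # items most co-interacted with item 0
```

Swapping the brute-force loop for an ANN library and the in-memory dict for a key-value store is exactly the kind of variation step 2 leads into.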


I once mentioned the rapper xxxtentacion in a text message. Now, whenever I sign off an email or text to my wife with an 'x', iOS decides that what I actually meant to write was 'xxxtentacion', and I've normally pressed send before I notice. After a couple of years, this is just how I talk to my wife.

How to cook really, really good food for yourself. I no longer crave restaurant food, and all of the really important things I learned about cooking take just the time to read about them, hear about them, and then try them. All without any special hardware.

A few examples:

1. Cooking jasmine rice: rinse it first, 1 c. water to 1 c. rice ratio. Bring to a boil, turn down heat to lowest setting. Leave the lid on /the entire time/. Fluff the rice (look this up) when done. (about 12-15 min of cooking)

2. Baking a cake: (any square pan yellow cake) Read how baking powder actually works, then you realize you need to mix and bake quickly. Letting it sit before baking will make a flatter cake. Also, stick a butter knife in the middle to test when it's done, if it comes out with batter stuck on it, it needs a few more minutes.

3. Eggs: When frying or scrambling, put the eggs in warm water before cracking to bring them to room temperature first. They cook better this way.

4. Chocolate syrup: 1 c. water, 1 c. cocoa powder, 1 c. sugar, 1/2 tsp vanilla, 1/2 tsp salt. Blend it in a blender. (a sealed container works best, as it's messy) Better than store-bought, super cheap, use organic if you like...

etc...

Why is this valuable? Because I am no longer tempted to waste money at restaurants any more, or buy unique expensive organic products (because I can make them now). I feel incredibly free and liberated that I get food at home that tastes better than what is at a restaurant now. (for about 90% of the stuff I like)

Also, I can teach my kids, and they start life with these skills. Great question, way too many things to write down...


I once worked at Chase Manhattan Bank, and one of their internal networking teams had a web site for wiring requests. They didn't want to work too hard, so their UI was designed to make data entry as slow as possible, mostly by using huge multi-level drop-down lists where the slightest twitch would make them collapse and you would have to start over navigating through them - repeat a dozen times for every run of cable, with several runs required to make an end-to-end connection. It wasn't custom programming, just taking full advantage of the era's browsers' inability to render that UI component well.

So I was building out a data center and needed something like forty thousand cables run, which translated into around one hundred and fifty thousand segments. I tried to give them this info via a spreadsheet, but they were steadfast that the web interface was the only way they could receive it. So I wrote a script to just post the data directly without going through the UI, ran it, and went home.

Turns out all their web form did was e-mail the values to a half dozen people. The e-mail system was Lotus Notes (this dates it), so each person got their own copy and there was a lot of overhead. The sudden influx of a million e-mail messages brought down Chase's email system for two thirds of the country. They spent days clearing the mail queue and recovering - they had to fly in IBM techs with suitcases full of disk drives to add the storage needed. Everyone who received the wiring requests spent days deleting them, with new ones arriving as quickly as they deleted the old ones. Then, when things were finally normal again, they asked me to resend my spreadsheet.

I worked at Twitch from 2014 to 2018. I was never on the video team, but here are some details that have changed.

Video:

- Everything is migrated from RTMP to HLS; RTMP couldn't scale across several consumer platforms

- This added massive delay (~30+s) early on, the video team has been crushing it getting this back down to the now sub-2s

- Flash is dead (now HTML5)

- F5 storms ("flash crowds") are still a concern teams design around; e.g. 900k people hitting F5 after a stream blips offline due to the venue's connection

- afaik Usher is still alive and well, in much better health today

- Most teams are on AWS now; video was the holdout for a while because they needed specialized GPUs. EDIT: "This isn't quite right; it has more to do with the tight coupling of the video system with the network (eg, all the peering stuff described in the article)" -spenczar5

- Realtime transcoding is a really interesting architecture nowadays (I am not qualified to explain it)

Web:

- No more Ruby on Rails, because no good way was found to scale it organizationally; almost everything is now Go microservices on the back end + React on the front

- No more Twice

- Data layer was split up to per-team; some use PostgreSQL, some DynamoDB, etc.

- Of course many more than 2 software teams now :P

- Chat went through a major scaling overhaul during/after Twitch Plays Pokemon. John Rizzo has a great talk about it here: https://www.twitch.tv/videos/92636123?t=03h13m46s

Twitch was a great place to spend 5 years at. Would do again.


With a great manager you get lots of work done and feel good about yourself and your accomplishments. You get the right amount of recognition for your work (not more or less than you want). You understand what's expected from you and you feel free to express yourself.

The really difficult thing about this question is that it's not a one-way street. If you don't work hard and have a good attitude, it will be hard to achieve anything. You need to have realistic expectations of how others will perceive what you do. You need to be able to balance the needs of others with your own needs (not too selfish and not too selfless). Your manager can help you with those things, but they can't actually do it for you.

I've been in a bad place at work many times in my career. The most important thing to ask yourself is: is it me, or is it my environment (including your manager)? Try to rule out as many of the "is it me" scenarios as you can. Try to put yourself in a good place. If you hit a wall where you are thinking, "I'm trying to do X, but Y is getting in my way and there is nothing I can do about Y", then you can see where the problem is. After you've "levelled yourself up" as much as you can, if you still feel constrained, then it's probably good to look for another place to go. I usually advise more junior people to stay in a job (even if it is not ideal) until they get to that point. It's easy to say, "That manager sucks! I can't work with them," and fly out the door having learned nothing. If you do that you run the risk of doing it over and over and over again.

When things start to work well, the thing you will hopefully notice is that it isn't just you. You can't perform to your maximum ability without a great manager (if you are in a job where a manager helps). Similarly, you can't perform to your maximum ability without working well with your coworkers. When it clicks, make sure to spend some time appreciating what those other people do for you. Everybody is different and I can't tell you exactly what it will be for you. The key is to work hard so that when you are in the situation where you can excel, that you are up to the task.


When I was a child in elementary school, a standards committee decided to change my native language.

They replaced the spelling of most words, and many grammatical rules.

We were forced to obey these changes; any use of the old rules was counted as a mistake in school.

Back then, many older books were still using the old rules.

By the time I left high school, almost no books with the old rules were left. All had been reprinted. All newspapers had switched. Autocorrect programs had been updated with the new rules as well.

In a matter of 8 years, an entire language had changed its orthography and parts of its grammar, top-down, and it worked out fine.

I’m sorry, if an entire human language with 120 million speakers can be updated top-down like that, a web spec can as well.


I'm 60+. I've been coding my whole career and I'm still coding. Never hit a plateau in pay, but nonetheless, I've found the best way to ratchet up is to change jobs which has been sad, but true - I've left some pretty decent jobs because somebody else was willing to pay more. This has been true in every decade of my career.

There's been a constant push towards management that I've always resisted. People I've known who have gone into management generally didn't really want to be programming - it was just the means to kick-start their careers. The same is true for any STEM field that isn't academic. If you want to go into management, do it, but if you don't and you're being pushed into it, talk to your boss. Any decent boss wants to keep good developers and will be happy to accommodate your desire to keep coding - they probably think they're doing you a favor by pushing you toward management.

I don't recommend becoming a specialist in any programming paradigm because you don't know what is coming next. Be a generalist, but keep learning everything you can. So far I've coded professionally in COBOL, Basic, Fortran, C, Ada, C++, APL, Java, Python, PERL, C#, Clojure and various assembly languages each one of which would have been tempting to become a specialist in. Somebody else pointed out that relearning the same thing over and over in new contexts gets old and that can be true, but I don't see how it can be avoided as long as there doesn't exist the "one true language". That said, I've got a neighbor about my age who still makes a great living as a COBOL programmer on legacy systems.

Now for the important part, if you want to keep programming and you aren't an academic. If you want to make a living being a programmer, you can count on a decent living, but if you want to do well and have reasonable job security, you've got to learn about and become an expert in something else - ideally something you're actually coding. Maybe it's banking, or process control, or contact management - it doesn't matter, as long as it's something. As a developer, you are coding stuff that's important to somebody, or they wouldn't be paying you to do it. Learn what you're coding beyond the level you need just to get your work done. You almost certainly have access to resources, since you need them to do your job - and if you don't, figure out how to get them. Never stop learning.


A piece of simple, concrete, implementable advice I read recently that I really like is "communicate in the language of options, not demands, imperatives, requests, or preferences."

Imagine yourself having a conversation with your manager or someone in a relatively similar position about a decision or suggestion you have. How often do you say things like "we have to", "we need to", "we really should", "can we", or "I'd like to"? The problem with all of these statements, from the perspective of someone like your manager, is that they hint at a passion or personal preference that doesn't belong in a decision-making process or conversation. In the flow of a conversation, it may feel like you're merely emphasizing a point and bringing your (presumably solid) technical experience to the table, but to the listener it often makes your suggestions appear self-serving, poorly considered, and possibly irrational from the perspective of the business.

Try to replace the above phrases with "one option is to..." and change your thinking to complete the thought in a way that makes sense. Avoid picking favorite solutions and presenting them as the clear winner or only option. It'll get you thinking in terms of pros and cons and how to persuade other people by appealing to their point of view. It will make you appear more level-headed and capable of making decisions that are good for the business. It'll also remind you of (and force you to fully consider) the frequently hidden option that's often forgotten when presenting a new idea or direction, which is the status quo.

