
Having respect for the skills of a worker and also believing that they are possible to automate are not mutually exclusive. It's no different from people saying the same thing about chess and chess players, or go and go players, before the problem was solved.

And it's not just blue collar workers that are a target of this idea. We are already starting to see automated programming moving out of the research lab and into the commercial realm, e.g. GitHub Copilot.



We are already starting to see automated programming moving out of the research lab and into the commercial realm, e.g. IBM's "FORTRAN Automatic Coding System", whose name is an abbreviation of "FORmula TRANslator". This is an enormous effort. John Backus, a longtime proponent of such "automatic programming" systems who is leading the project, reported in 1955 that in its first edition, in early 1956, "FORTRAN" is expected to include eight to ten thousand instructions. It will be distributed to all lessees of the IBM 704 high-speed electronic digital computer in 1957. Though many programmers are skeptical of the quality of programs produced by the so-called "compiler", experience has shown that it only takes 2-3 days to learn, and the programs output by "FORTRAN" are often better than those written by expert programmers!

One of the great improvements in the second edition of FORTRAN, FORTRAN II, is that an application program can be written not as the output of a single compilation, but of many separate compilations.

[The above is liberally quoted and reorganized from the IEEE Annals of the History of Computing, 6(1), 01984.]


> We are already starting to see automated programming moving out of the research lab and into the commercial realm, e.g. GitHub Copilot.

As soon as GitHub was acquired by Microsoft I knew their intentions were for automated coding tools. I wasn't concerned about this affecting my livelihood in the short to medium term, because state-of-the-art ANNs won't be able to grasp the context of business requirements without developing adult human level intelligence. Thus, even a full program built by such a system would need to be verified by a human to be sure it will behave as desired, obviating any gain in terms of automation. Even the rudimentary boilerplate that copilot spits out suffers from this problem.


I started thinking about how you might automate game development, and I think it's a pretty good thought exercise on the topic.

I feel like the end result is just what we already have, game engines with visual programming and drag and drop editors. You don't add much value by automating the programming since you still have to define the input and outcome.

The real win was building the complex logic and state editor into a good UX.


This video really amazed me in terms of automated game development. My fantasies are something like this and GPT-3 plus a couple decades of progress.

I think we'll get to genuinely open-ended sandbox games.

https://youtu.be/udPY5rQVoW0


Predicting the next frame from previous frames and learnt sequences is a neat trick, but I think automated game development is already very possible with simple techniques like genetic algorithms, or even a PRNG. I mean Rogue and its descendants are very much automated game development, but GPT-3 could be useful for something like dynamic quest generation, world-building, narrative, adaptive NPCs (including dialogue) etc.
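
For the PRNG point, here's a minimal Rogue-style sketch (the names and parameters are made up for illustration): a seeded generator carves random rooms and joins them with L-shaped corridors, which is the heart of classic roguelike map generation.

    import random

    def generate_dungeon(width=40, height=20, rooms=6, seed=42):
        """Rogue-style map: random rooms joined by L-shaped corridors."""
        rng = random.Random(seed)  # seeded PRNG: same seed -> same dungeon
        grid = [["#"] * width for _ in range(height)]
        centers = []
        for _ in range(rooms):
            w, h = rng.randint(4, 8), rng.randint(3, 5)
            x, y = rng.randint(1, width - w - 1), rng.randint(1, height - h - 1)
            for r in range(y, y + h):
                for c in range(x, x + w):
                    grid[r][c] = "."           # carve the room
            centers.append((x + w // 2, y + h // 2))
        for (x1, y1), (x2, y2) in zip(centers, centers[1:]):
            for c in range(min(x1, x2), max(x1, x2) + 1):
                grid[y1][c] = "."              # horizontal corridor leg
            for r in range(min(y1, y2), max(y1, y2) + 1):
                grid[r][x2] = "."              # vertical corridor leg
        return "\n".join("".join(row) for row in grid)

    print(generate_dungeon())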


I'm not sure classic procedural content generation like in rogue-likes is all that comparable to using a GAN to run the whole game?

Have a look at the video, it's quite impressive.


I've seen it before, and I've a basic understanding of GANs, I just don't see it being overly useful. This technique can make a really blurry simulacrum of an actual game, and that's really cool, but I'm not sure how it could be used to make something both truly novel and coherent. There's plenty of low-hanging fruit for AI within an engine, whereas using AI to be the entire engine is somewhat infeasible.


You are right about the technique not being very useful as of today. My fascination stems from my assumption that more resources poured into this approach would yield vastly better results.

Even just watching the video, I came up with several possible improvements to try out. Eg adversarial training that would really home in on the situations and aspects where the model is still weak, like edge conditions, instead of just using normal gameplay as input.


It's definitely an interesting area of research, but for that example you still had to make the whole game in the first place in order to have something to train the model on. Say you have a novel game idea, how could you use that approach to make it a reality? I'm not sure you could, but like you mention it's a really early example and who knows where it ends up.

The other part about that GAN Theft Auto example is that it doesn't actually know what's going on, like there's no game state. All it knows is that "When I have a frame that looks like this, and they press that button, I think the next frame would usually look like this". So it's got no internal game logic, it's just really good at painting what games look like.
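
A minimal sketch of that "frame + button in, next frame out" idea (a toy architecture of my own, not the actual GameGAN model behind GAN Theft Auto):

    import torch
    import torch.nn as nn

    class NextFramePredictor(nn.Module):
        """Toy conditional model: (current frame, pressed button) -> next frame.
        There is no game state anywhere; it only learns pixel statistics."""
        def __init__(self, n_actions=4):
            super().__init__()
            self.encode = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            )
            self.action_embed = nn.Linear(n_actions, 64)
            self.decode = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, frame, action):
            z = self.encode(frame)           # (B, 64, H/4, W/4)
            a = self.action_embed(action)    # (B, 64)
            z = z + a[:, :, None, None]      # inject the button press everywhere
            return self.decode(z)            # predicted next frame

    model = NextFramePredictor()
    frame = torch.rand(1, 3, 64, 64)         # dummy 64x64 RGB frame
    action = torch.eye(4)[[2]]               # one-hot "button 2 pressed"
    next_frame = model(frame, action)        # same shape as the input frame

Train something like this with a reconstruction (or adversarial) loss on recorded gameplay and it will paint plausible next frames without ever representing the game's logic.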


About the first one:

Even going about this very naively, you could at least use it to train a model against a supercomputer running the game, and then run the inference on much more modest end-user machines.

But you can be much more ambitious: have you seen eg style transfer? So you could probably do a bit of ML black magic to train your model on GTA, and then point it at the Google Earth data to get a GTA-like set in real-life London.

Or you could use something like style transfer to go for a cartoony look, or add ray-tracing like effects, even if you didn't have these effects in your original engine.

Or you can use a pre-trained model (eg on GTA), and then spend a relatively modest amount of extra training to get a different kind of game, eg one that has magic or so.
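
For concreteness, the classic style-transfer recipe (Gatys et al.) looks roughly like this; the random tensors below are stand-ins for real images, and a game-speed version would use a trained feed-forward network rather than this per-image optimization:

    import torch
    import torchvision.models as models

    # Feature extractor: content is matched on deep activations,
    # style on Gram matrices of activations at several depths.
    vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
    for p in vgg.parameters():
        p.requires_grad_(False)

    def gram(feat):
        """(B, C, H, W) -> (B, C, C): second-order statistics = 'style'."""
        b, c, h, w = feat.shape
        f = feat.reshape(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)

    def extract(x, layers=(1, 6, 11, 20, 29)):
        feats = []
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in layers:
                feats.append(x)
        return feats

    content_img = torch.rand(1, 3, 256, 256)   # stand-in for a GTA frame
    style_img = torch.rand(1, 3, 256, 256)     # stand-in for the target look
    pastiche = content_img.clone().requires_grad_(True)

    content_target = extract(content_img)[-1].detach()
    style_targets = [gram(f).detach() for f in extract(style_img)]

    feats = extract(pastiche)                  # one optimization step:
    loss = torch.nn.functional.mse_loss(feats[-1], content_target)
    loss += sum(torch.nn.functional.mse_loss(gram(f), t)
                for f, t in zip(feats, style_targets))
    loss.backward()                            # then step an optimizer on pastiche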

About the latter part: I do think their model is already running with some state. But even if it ain't, that's a relatively small thing to add with already known standard techniques (or you can come up with new techniques.)


And this is where I think the game side of games programming is more of an art form (I say this as an ex-professional game developer). GPT-3 could be used for narrative generation, environment generation, and adaptive AI, and these are all exciting areas for research and experimentation. But as you say, underneath is a solid engine, and one day an AI could feasibly build an engine from scratch, but I think that day is decades away. The AI we have today is just toys.


I think it is different from people saying we couldn't automate chess or go. The difference is that chess and go are purely data domains: the game can be translated 1:1 into a computer simulation AND the inputs are data.

I realise that you could define everything as data - laying a brick, you take the inputs of where to position the brick, etc. However I think we can make the distinction between chess where the data is "Pawn is on e4" and the much greater complexity of the real world where we are dealing with billions of atoms. Perhaps not everyone agrees with me.


A big part of ML in robotics is actually making nearly 1:1 simulations of the world and the actuators.

It's fairly successful. We can, for example, simulate driving a car really well.

Simulating human behaviour is harder, but simulating brick laying is not that hard, we have the technology to do so already.


I suppose it all depends on the abstractions you make and how well those abstractions hold in the real world. Humans of course make abstractions but many of these are done subconsciously.

Simulating brick laying can probably be done in a controlled environment, but is it possible to make it low-cost enough and accurate enough for all general-purpose brick-laying situations? Probably, given enough investment, we could get closer. Is ironing out all the nuances cost-effective? I don't know.

We can definitely simulate a driving environment, but given the recent struggles of self-driving cars I don't think I would say that we are at the point where we've solved the problem of actually driving them in everyday situations.


The biggest issue with self-driving cars isn't actually getting the car where we want it to go, it's predicting human behaviour and dealing with unseen conditions. But the actual physics of driving cars, yeah, we've gotten that down beyond everyday situations.


>We can simulate for example driving a car really well.

Can you qualify this more specifically? In many domains (particularly safety-critical ones) “reasonably well” may not be sufficient.


Car simulators are accurate enough that F1 drivers drive more in simulators than in practice laps. They are very accurate. More than well enough to train a model to drive, for example, reasonably fast. Of course the real world is always different even if simply because the conditions are different, so you keep some headroom.


I think this has more to do with cost and convenience than with being a better representation of what it's actually like driving on the extremely varied state of a track. You can see this with the practice laps being extremely important because of the way tire compounds, weather and car setups drastically change the dynamics of the cars. The simulators can't replicate this effectively.


Self-driving hours having a lower incident rate than human drivers: that's the minimum. Having a 99.9x% success rate (put as many 9s as you desire) is your qualifier, and that's a standard measure in operational uptime.


I've heard the opposite: that self-driving cars underperform compared to human drivers across the board. The sample size of self-driving cars doing everything a driver does is also minuscule. Do you have a source?


So far Google's self-driving cars have vastly fewer accidents than humans have per mile. Or what kind of performance are you interested in? I am sure, they could also be made to drive faster than humans and still be safer on average thanks to superior reflexes and foresight.

But what do you mean by 'across the board'?

As for sources, a web search gives many articles about the safety of self-driving cars. See eg https://www.wsj.com/articles/self-driving-cars-could-save-ma...


Sorry for the delay. I don't get notifications. Not sure how people are so active on here with communication. Maybe you or someone could recommend a way?

Anyways:

> But what do you mean by 'across the board'?

By that I mean across all of the people driving in the US and their rate of incidents. For example: average miles driven, the number of drivers, and the rate of accidents. I think the most intriguing detail could be drawn from the rate of fatal accidents, since that's the most concerning, ignoring accidents that cause a casualty as I don't know the method for gathering that data offhand. One could glean a lot of info from that. Here are some rough numbers I gathered, and please forgive the naive approach to my data gathering to express a point:

Average miles driven/person[0]: 13,000
Average fatalities/year[1]: 37,000
Approximate number of licensed drivers[2]: 231,652,000

I don't have numbers for self-driving cars and their accidents, but regardless, would they perform the same with the same number of miles driven per car? Keep in mind that self-driving cars currently aren't navigating in all circumstances and will beep to make the human take control again. At least with Tesla.

[0] In 2019, there were almost 229 million licensed drivers in the United States. (Source: https://www.asirt.org/safe-travel/road-safety-facts/)
[1] Over 37,000 Americans die in automobile crashes per year. More than 90 people die in accidents every day. (Source: https://www.thewanderingrv.com/car-accident-statistics/)
[2] https://hedgescompany.com/blog/2018/10/number-of-licensed-dr...
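
Running those rough numbers (my own arithmetic, using only the figures above):

    miles_per_driver = 13_000        # average miles driven per person per year
    drivers = 231_652_000            # approximate licensed drivers
    fatalities = 37_000              # fatalities per year

    total_miles = miles_per_driver * drivers       # ~3.0 trillion miles/year
    rate = fatalities / (total_miles / 1e8)
    print(f"{rate:.2f} fatalities per 100 million vehicle miles")  # ~1.23

That's the human baseline a self-driving fleet would have to beat, and at roughly one death per ~80 million miles, you need an enormous number of autonomous miles before the comparison is statistically meaningful.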


Just to be clear: when I say self-driving, I mean whatever Waymo is doing. Tesla has assisted driving at best at the moment. Waymo is aiming for true self-driving.

Looking at fatalities would make the analysis easier, but when I last checked, Waymo hadn't driven enough miles to make a good comparison possible on that metric.

(They haven't killed anyone yet, but neither would the average human driver have done so, yet.)

So we would need to look at less dramatic accidents.


Yes, I think what you're saying is accurate. My apprehension is the lack of concrete data we have for comparison at this time to wholeheartedly put my life in the hands of engineers for this particular thing. I do, however, look forward to a well vetted, tested and regulated automated-driving future.


You are probably comparing accidents of self-driving cars in optimal conditions (since they cannot even drive in any other conditions lol) to all accidents in all conditions in human drivers.


Humans get into plenty of accidents under clear skies, too.

See https://www.forbes.com/sites/bradtempleton/2020/10/30/waymo-... which tries somewhat to correct for conditions.


I think there are some potential flaws with the “per mile” or “disengagement” metrics.[1] It feels very much like using LOC as a measure of software quality. Sure, it’s a metric but probably not a very good proxy for what we’re after.

[1] https://www.automotivetestingtechnologyinternational.com/ind...


GP doesn't seem to be saying that chess and brick-laying are equally difficult to automate. GP is saying that a belief in brick-laying AI represents no more contempt towards brick-layers than a belief in chess AI represents towards chess players.


> We are already starting to see automated programming moving out of the research lab and into the commercial realm

Every step in programming tooling since programs stopped being input as manual hardware configuration has been automation of programming, and it's just made more work, and higher-paying work, for programmers.


The main reason there's more programming work is that machines are capable of doing more (connectivity, mobility, UX, marketing, etc).


What do you think enabled these things?


Advances in hardware and software.


Not the same programmers, though, nor the same job. The job of programmer-by-plugging-in-wires was still eliminated.


> Not the same programmers, though, nor the same job.

The details of the job changed as more low-level pieces of it were automated, leaving the higher-level, more abstract bits. But through most of the changes, while some either bailed for other work or rode the dying embers of the old way to retirement, programmers generally adapted.


Code has been getting automated since basically day one of programming.

The funny thing about automation in programming - it has always opened up more doors - and led to more employment in programming.

We are so far from anything that resembles real automated coding that coding automation should continue to be celebrated by engineers for a long time.

GitHub Copilot and VS Code aren't going to replace you - they're just going to let your company offer a better product, release a new version sooner, test different versions, etc.


Right. A working automated bricklaying machine would be like a C compiler for brickmasons.


It doesn't perfectly translate, IMHO. A bricklaying contractor that has a mostly automated machine doing 80% of the job would probably hire maybe 1 guy to get shit off the truck and clean dropped mortar. Normally he might have 2 or 3 extra guys to do some stretches or turns or something like that.


Look at houses now vs houses from 200 years ago. Modern houses are much more advanced and complex. Automation such as a bricklaying machine would allow these 2 or 3 other guys to do something else that would support further complexity and advancement in house building, or anything else for that matter, just as automation doesn't cause loss of jobs in programming.


I'd rather live in a house without undefined behavior.


Do you suppose handwritten assembly had a lesser rate of unexpected or unintended behaviour?




Frameworks like React are a type of automation of coding. Any framework, or any higher-level language. Anything that isn't just entering ones and zeroes is already a level of automation in coding.


As long as you never learn a second language.


Automated tools will lower the difficulty of coding to the point where it'll be easy to pick up and be massively productive. Developers will be replaced by domain experts becoming productive developers easily with automated tools.


There's an assumption that great coders eventually write themselves out of a job. It's only half true. The reality is, they write themselves into a better job. Because the more divorced these "domain experts" become from the underlying processes and scalable systems that back their GUI-based decisions, the more of a technological elite the coders are who can delve into a mess of hardware and software stacks and explain or fix it when something goes wrong.

If it used to be the height of corner-suite hubris to believe that code and coders were replaceable in building a simple app, it's now become something like magic to them that it gets done at all. And we can see in realtime how this system breaks down when there aren't enough coders at any price to fix the system.

To ever get system A to write system B, someone has to write system A and then know how to fix it. You're imagining a miraculous future where system A diagnoses and repairs itself. If it could do that (although it never will), it would have long ago dispensed with useless business managers, and supposed "domain experts". Every coder is a domain expert by the time she's done writing a serious piece of business software.

The execs who sit on the fragile shell of a company to both parasitically raise funds and exploit coders are the only people who hold the fantasy that one day they'll never be at the mercy of investors or coders. It's a neat way of reassuring themselves that they have value, but not much else.


Ha ha. I remember people making those claims about 4GLs and visual programming tools 30 years ago. It wasn't true then and won't be true in our lifetimes. Domain experts typically lack the mindset to think through edge cases and failure modes.


"If I had a nickle for every time..."

The closest any product ever came to this was Excel.


Didn't happen with Visual Basic, has not happened with JavaScript.

Domain experts are all using Excel, not any other automated tool.


And, honestly, Excel is a pretty good tool for that.

I just hope we could improve on it.

See eg https://www.microsoft.com/en-us/research/podcast/advancing-e...


Lowering the difficulty of lifting boxes doesn't eliminate warehouse personnel: it means that now every worker can lift hundreds of heavy boxes a day and the company can deliver 100x more stuff. Businesses don't want to get rid of personnel; they want to increase profit-per-employee, and automation does just that.


Haskell automates a lot of programming but it sure isn't easy


Haskell is fun, but it isn't special in this regard.

Any language any human would touch these days automates lots and lots of things for you.

(Even Assemblers do a lot for you nowadays.)


Are we sure that the ML experts who wanted to beat real human chess players actually respected them? Wouldn't they know that at some point the enjoyment of chess itself might fade if the best in the world is a computer? Maybe chess is having a moment right now due to Netflix, or maybe it's a permanent upswing. But long term, I don't know if it will remain interesting to people.


I personally doubt chess will fade. As far as I can tell, the enjoyment of high-level chess is less to do with reaching objective perfection, rather it's more the "thrill" of seeing a _human_ who has poured their life into the pursuit of improving their game. It doesn't really matter that a computer can beat them, we care about them because people like seeing other people's talents.

And on a casual level, nothing changes if the best in the world is a human or computer, so the casual player isn't particularly affected.


Dunno, but people are still into competitive weightlifting even though the forklift has been around for over a hundred years.


Chess is now more popular than ever before.

Not sure how important Netflix is here? There are lots of chess players on YouTube, too.

Go also got much more popular in the West for a while when AlphaGo came along.


Programming will be automated long before brick laying. We can barely print a document with confidence.


Programming is the conversion of ambiguous human requirements into machine readable instructions. To automate, computers would need to understand ambiguous human requirements. This is AGI.

By comparison, brick laying is trivial.


Mix mortar to the consistency of mashed potatoes and apply gently but thoroughly to two faces of a heavy, but brittle object, then place that brick into a corner with both faces with applied mortar of even thickness touching receiving faces at the same time, no lateral sliding. Press the brick gently into the corner and tap, making sure that it lines up perfectly vertically with a string line and horizontally with the other bricks, then wipe off excess mortar, creating a visually appealing groove, all while standing on a muddy slope. Best of luck, robots.


https://www.fbr.com.au/view/hadrian-x

Brick laying robots already exist. This isn't a problem that can't be solved, it's a question of making it economically viable.

But the future is pretty obviously going to be skilled machinery operators overseeing the automation.

Even more interesting is that once you automate like this, the constraints start changing: i.e. it's easier to have a robot lay bricks with epoxy than cement, whereas a non-automated workflow would struggle.

There's a Perth-based company which has a prototype which basically will lay out an entire house on a concrete slab via a boom arm that uses this approach: pallets go in, structured bricks come out.


Yes, that system is discussed in the article, and it looks like it doesn't work very well. I doubt it can keep up with human labor.

I agree. It will happen eventually, but the bigger point here that is more related to discussions on HN is that despite a lot of enthusiasm, machine learning and AI are still in the amino acid phase of evolution, and everything still sucks.


It doesn't have to keep up with human labor, it just has to reduce the total amount of human labor. And slow today doesn't mean slow tomorrow. The residential dishwasher is a great example of this.


I mean once the Boston Dynamics stuff gets good enough… it will. To be frank, I want my Jetsons robot maid.


> Yes, that system is discussed in the article, and it looks like it doesn't work very well. I doubt it can keep up with human labor.

It works poorly, therefore bricklaying has been automated. The programmers won. The bricklayers won too. It's only Grakel with his weird bet that lost.


You can mention these same difficulties of tasks for soda can manufacturing or a dozen other How It's Made videos. And yet, a lot of those processes that are just as complicated as brick laying have been automated by robots.


Key difference: you deploy a soda can robot in a factory. You have an enclosed environment with all of your inputs nicely set up.

For robot bricklaying, you can depend on the electrical supply. Everything else is a toss-up. I do think it's possible, but you don't choose your working environment or the weather conditions so everything is gonna be a lot more hassle.


And then it starts to rain.


This plays exactly into the OP's comment - but if that is your sincerely held belief, there are ~4,000 bricklayers~[1] in America today and you could make a whole boatload of money if you could automate their work.

1. According to the BLS link below it's actually closer to 50,000.


> there are 4,000 bricklayers in America today

There are over 50,000 bricklayers in the US according to the BLS[1].

[1] https://www.bls.gov/oes/current/oes472021.htm


Oh sorry - google apparently failed me - thanks for the correction!


No one is even thinking about trying to completely automate coding.

There's tons of "no code" solutions - but these are all just different versions of coding and programming languages - usually with GUIs and pictures instead of words.

There are more than 60,000 SWEs working at my company. Amazon, Google, Apple, and MSFT could make >$10Bn/year each if they could automate coding.

The fact that none of them are even trying - when it sort of goes with their core businesses - should give you some indication that this is something unlikely to be automated any time soon.

The reason bricklaying ISN'T automated is probably because there are ONLY 59,000 bricklayers in the US (a $3B market), it's not generalizable, and even if it were - the cost to move a machine, set it up, and maintain it - it's hard to imagine massive cost savings - 50% seems generous.

If there were >1M bricklayers - and/or they made >$400k/year on average, the work was generalizable, and automating would bring huge cost-savings that could be captured - there would be a LOT more effort into automating bricklaying.

But none of these are true. How much of brick laying is generalizable (building a new house) vs custom (repairing some old wall with non-standard bricks)? I have no idea - but I'm guessing not a lot more than 50%. That's a $1.5B market. You'd be lucky to cut the costs by 50% - that's maybe $750M.

It could easily cost more than that to automate bricklaying! Why even try?? No one is interested in investing in that risk / reward.
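
That back-of-the-envelope, spelled out (every input is a guess from above):

    market = 3e9           # ~$3B/year US bricklaying market (59,000 workers)
    generalizable = 0.50   # guess: share of the work a machine could take on
    savings = 0.50         # generous guess at achievable cost reduction

    prize = market * generalizable * savings
    print(f"best case: ${prize / 1e9:.2f}B/year")   # -> $0.75B/year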

By contrast - most of Radiology could be automated and most of it is generalizable. Since hospitals are monopolies and healthcare is a mess - you might be able to capture all the cost savings - which would be close to 100%!

Since there's ~35k Radiologists, and they're some of the highest paid workers in the world - there are a lot of efforts to actually automate this (and they're doing quite well).

If you think automating radiology is easier than building a hamburger-cooking robot, you're naive. If you think AGI is easier to achieve than building a tomato-harvesting machine, you're clueless.

There just simply aren't hundreds of billions to be made automating cooking hamburgers and harvesting tomatoes more efficiently. And it's not easy to generalize and automate cooking or harvesting EVERYTHING. And even if there was, restaurants and groceries are commodities - not monopolies. You couldn't capture all the savings. A race to the bottom on prices would eventually just pass the savings on to the customer - not juice profits. That's not something you want to spend massive, risky R&D on. That's why we haven't automated cooking hamburgers and harvesting tomatoes. Not because it's harder than protein folding, fusion energy, or true AGI...

From another point of view - we've had machines that mop floors for decades - and there's still a lot of people employed to mop floors. Is this because mopping floors is incredibly complex and creative? No - it's because people who mop floors get paid minimum wage, and they do a lot of other things, too.


No code still has a bit to go. Its main focus right now is having a model that can generate a UI and a method to update a database. When you get into the weeds of what companies want, it's that a person in group A is allowed to update, group B can create things, and group C can only view. And on top of that there is private stuff that only the same user can see. It's that shit that makes no-code a non-starter for Google, Amazon, ...
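
E.g. the whole rule set is a few lines in actual code (all names made up), which is exactly the part no-code builders make painful:

    PERMISSIONS = {
        "A": {"view", "create", "update"},   # group A can update
        "B": {"view", "create"},             # group B can create things
        "C": {"view"},                       # group C can only view
    }

    def allowed(user, group, action, record):
        """record is a dict like {"owner": str, "private": bool}"""
        if record["private"] and record["owner"] != user:
            return False                     # private rows: owner's eyes only
        return action in PERMISSIONS.get(group, set())

    assert allowed("alice", "A", "update", {"owner": "bob", "private": False})
    assert not allowed("carol", "C", "view", {"owner": "bob", "private": True})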

Perhaps a company wants to display the current Bitcoin price on a screen and let you do currency conversions; you can do that in "no code", but then again, a programmer can also do that with code in 15 minutes...


I imagine this goes for most skilled trades jobs. The lack of generalizability becomes apparent every time I undertake a DIY project that doesn’t go as planned


O*NET claims nearly 82k brick masons

https://www.onetonline.org/link/summary/47-2021.00


A little bit of tongue in cheek:

As a tradesperson myself, working in a highly automated part of the metal fabrication process, I feel an urge to say something like:

The only thing standing between a programmer and unemployment is a sufficiently advanced compiler

But only because I have great contempt for the hubris on display in these sorts of threads.

At the end of the day though, it's a-little-bit-from-column-A-and-a-little-bit-from-column-B.

We went from not-flying to landing robots on another planet in a handful of decades, who knows what a little bit more processing power and a few more layers of abstraction could bring.


Printing is hard primarily because of misaligned business incentives, which form a bubble of suck that's eerily resilient to market optimization.

Automating programming definitely seems like an easier target in comparison.


Printing works really well. It's print drivers (software) that are an utter and complete mess.

I've never had a printer that didn't work: I've had plenty that wanted the correct offering to the HP website to be made and several hundred megs of adware installed before a postscript file made it to the actual hardware.


Nonsense. If you could automate programming, it would be the last thing to truly be automated before the singularity. After all, if programming was truly automated, the program for creating a brick laying robot would write itself.


> It's no different from people saying the same thing about chess and chess players, or go and go players...

Well, car assembly plants are older than Deep Blue, IIRC, and they are very complex and "very robotic" in a physical sense.


Actually it is really different, since games like chess or go are closed problems, i.e. they are deterministic. The number of parameters necessary to integrate to turn an ostensibly open real-world problem like "driving" into a deterministic game like computer chess or go is far greater.


Is a computer iterating through every possible chess move "skill" or just being able to compute the best odds?


Is this any different from what a human is doing?

I would also say that the approaches taken by something like AlphaGo are more interesting, too. Because unlike chess the solution space for Go is too large to simply look at all possible moves.
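
Rough numbers (the usual ballpark branching factors, ~35 legal moves per position in chess vs ~250 in Go, over a typical game length):

    # Game-tree size ~ branching_factor ** game_length (very rough)
    chess = 35 ** 80      # ~80 plies per game
    go = 250 ** 150       # ~150 plies per game

    print(f"chess ~ 10^{len(str(chess)) - 1}")   # ~10^123
    print(f"go    ~ 10^{len(str(go)) - 1}")      # ~10^359

That's why AlphaGo pairs learned move priors with Monte Carlo tree search instead of exhaustive enumeration.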


A computer iterating through things isn't skill per se, TO ME, because it's just what it was built to do. No more skill involved than a machine cutting wood. A human doing this is exercising a skill, as they have honed that ability with hours upon hours of studying and reading and playing etc. A computer can do this from the first time the devs get the bugs worked out.

Is that the same?


Where's the robot that can actually pick up and move pieces on standard chessboard, with no special guidelines or restrictions on pieces?


Seriously?

There's lots of them on YouTube. Here's one with two different robots.

https://www.youtube.com/watch?v=65YDAXfSAWw


Nope. Those are pre-programmed to know "the board is here", "the pieces start here". Give them a random (but still legal) position and ask them to make the best legal move. See if they can even find a piece.

Quick test: replace any pawn with a queen. See if the bot notices.


There are plenty of other videos that demonstrate computer vision detecting chess pieces.



