It's also a huge danger, as the system FB uses to tag and categorize photos is clearly flawed. For example: Meta took a business page I ran, with over 150K followers, offline because of a photo that violated their 'strict anti-pornography' policies. The picture was of a planet - Saturn - and it took weeks of the most god-awful to and fro with (mostly) bots to get them to revoke the ban. Their argument was that the planet was 'flesh-toned' and that their A.I. could not tell it was not actually skin. The image was from NASA via a stock library and labelled as such.
Years ago, Google banned my secondary Google a/c, which I used at best once every few months - I never even browsed with that a/c logged in, and never used it for anything other than Gmail; I doubt YT etc. was even activated on it. The reason given was a kind of porn that I can't bring myself to type the name of. I didn't even think of appealing - I was so fucking scared and ashamed without ever having indulged in that.
But that was when I bought my domain and a mail hosting service, and a few months later I had moved my email to my own domain almost everywhere.
Years later Google also killed the Google Play a/c on my primary Gmail (i.e. what had been my primary email earlier) for lack of use - true, I had never published an app - and didn't refund the $25, even though I had finished all the tasks needed to keep the a/c alive three days before the deadline. I had also asked them, at least 5 times over a span of 40 days, how to add the bank a/c to get the refund - they kept telling me to "add the bank a/c for refund" without ever telling me how, or sharing an article or page that explained it. I could never find out how.
They kept the $25 - appeals weren't even allowed/entertained. I got "final.. no further response" and that was it, literally no further response on it.
I stop to think sometimes why.. just why we gave these trillion-dollar companies this much power - the likes of Apple, Google, AMZN, Meta, MSFT.. why? Now we literally can't fight them - not legally, not any other way. It seems we just can't.
> They kept the $25 - not even appeals were allowed/entertained. I got "final.. no further response" and that was it, literally no further response on it.
It's the kind of thing I'd take to small claims court out of spite.
One reads completely ridiculous cases like the one you describe and shakes one's head at those who preach the notion of creating ever more thickets of AI-"powered" bots as the front-line interface for our social services, customer support, and other institutional interactions.
Idiocies like this are why AI should absolutely never (at least at any present level of the technology) be an inescapable filter between a human with a complaint and a response. Truly, fuck the mentality of those who want to cram this tendency down the public's throat. Though it sadly won't happen, thanks to sheer corporate growth inertia, companies that do push such things should be punished into oblivion by the market.
I worked on a project where one of the services was a model that decided whether to pay a medical bill.
Before you start the justified screams of horror, let me explain the simple honesty trick that ensured proper ethics, though I guess at a cost to profit unacceptable to some corporations:
The model could only decide between auto-approving a repayment or referring the bill to existing human staff. The entire idea was that the obvious cases would be auto-approved, and anything more complex would follow the existing process.
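In case the shape of that is unclear: it's the standard "abstain when uncertain" pattern, where the model's only two outputs are approve or hand-off. A minimal sketch (all names and the threshold here are my own illustration, not the actual system):

```python
# Sketch of the "approve or defer" pattern described above:
# the model never denies - it either auto-approves a clearly payable
# bill or routes it to the existing human review queue.

APPROVE_THRESHOLD = 0.95  # assumed cutoff; in practice tuning this is the whole game


def route_claim(approval_score: float) -> str:
    """Return 'auto_approve' only for high-confidence payable bills,
    otherwise 'refer_to_human' (i.e. the pre-existing process)."""
    if approval_score >= APPROVE_THRESHOLD:
        return "auto_approve"
    return "refer_to_human"


print(route_claim(0.99))  # auto_approve
print(route_claim(0.60))  # refer_to_human
```

The important property is that "deny" is simply not in the model's output space, so the worst the automation can do to a claimant is leave them where they already were: in front of a human.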
Mmmmhm, which means the humans now understand that they should be callous and cold. If they're not rubber stamping rejections all the time then the AI isn't doing anything useful by making a feed of easy-to-reject applications.
The system will become evil even if it has humans in it because they have been given no power to resist the incentives
All you have to do is take an initial cost hit where multiple support staff review each case as a calibration phase: generate cohorts of, say, 3 reviews, where 2 have the desired denial rate and 1 doesn't. Score each cohort by how much its members agree, then rotate who's in training over time, and you'll converge on a target denial rate.
There will always be people who "try to do their best" and actually read the case and decide accordingly. But you can drown them out with malleable people who come to understand that if they deny 100 cases today, they're getting a cash bonus for alignment (with the other guy mashing deny 100 times).
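The mechanics of that scheme are worth spelling out, because it never has to say "deny more" out loud. A simplified sketch (hypothetical names and numbers; the real scheme above uses cohorts of 3, which reduces to the same per-reviewer alignment score):

```python
# Illustrative sketch of the calibration scheme described above:
# rank reviewers by how closely their denial rate matches a target,
# and "rotate out" the least aligned - i.e. the most conscientious.

from statistics import mean

TARGET_DENIAL_RATE = 0.30  # the rate management wants to see


def denial_rate(decisions: list[str]) -> float:
    """Fraction of decisions that were denials."""
    return mean(1.0 if d == "deny" else 0.0 for d in decisions)


def alignment(decisions: list[str]) -> float:
    """Higher is 'better' from management's perspective:
    1.0 means the reviewer hit the target denial rate exactly."""
    return 1.0 - abs(denial_rate(decisions) - TARGET_DENIAL_RATE)


reviewers = {
    "reads_the_case": ["approve"] * 9 + ["deny"],      # 10% denials
    "rubber_stamper": ["deny"] * 3 + ["approve"] * 7,  # 30% denials
}

# The conscientious reviewer scores worse and gets rotated out first.
ranked = sorted(reviewers, key=lambda r: alignment(reviewers[r]), reverse=True)
print(ranked[0])  # rubber_stamper
```

Note that no individual metric here mentions case merit at all - "agreement" is doing all the work, which is exactly why the incentive is invisible from the inside.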
Technology solves technological problems. It does not solve societal ones.
I am not disagreeing, and I am not arguing for AI.
I am just saying that the perverse incentives already exist and that in this case AI-assisted evaluation (which defers to a human when uncertain) is not going to make it any better, but it is not going to make it any worse.
Actually it may, even if only slightly. Because now, as the GP says, the humans know that the only cases they're going to see are the ones the AI suspects are not worthy. They will look at them more skeptically.
I totally agree that the injustices at play here are long baked in, and that this is not the harbinger of doom - medical billing already sucks immense amounts of ass, and this isn't changing it much. But it is changing it, and worse, it's infusing the credibility of automation, even in a small way, into the system. "Our decisions are better because a computer made them" - which doesn't deal at all with the fact that we don't fully understand how these systems work or what their reasoning is for any particular claim.
Insofar as we must have profit-generating investment funds masquerading as healthcare providers, I don't think it's asking a ton that they be made to continue employing people to handle claims, and customer service for that matter. They're already some of the most profitable corporations on the planet, are costs really needing cutting here?
>"Our decisions are better because a computer made them"
This is the root of the problem, and it is (relatively) easy to solve: make any decision taken by the computer directly attributable to the CEO. Let them have some Skin in the Game; that should be more than enough to align the risks and the rewards.
Actually the real issue for the humans was the possible reduction in employment, which is why the union blocked deployment for a time until a deal was brokered.
It helps, as you can guess from the "union" comment, that it wasn't an American health insurance company.
How hard would it be to tweak that model so that it decides between auto-paying and sending the bill to a different bot that hallucinates reasons to deny the claim? Eventually some super-smart MBA will propose this innovative AI-first strategy to boost profits.
Funny enough, the large AI companies run by CEOs with MBAs (Alphabet and MSFT) seem to be slow-playing AI. The ones promising the most (Meta, Tesla, OpenAI, Nvidia) are led by pure technologists.
Maybe it’s time to adjust your internal “MBAs are evil” bias for something more dynamic.
They are slow-playing the promise of what AI can, should, and will accomplish for us.
Nadella said this yesterday at YC’s AI Startup School:
> “The real test of AI,” Nadella said, “is whether it can help solve everyday problems — like making healthcare, education, and paperwork faster and more efficient.”
> “If you’re going to use energy, you better have social permission to use it,” he said. “We just can’t consume energy unless we are creating social and economic value.”
Thanks. I agree with the things Nadella said there. But it rings pretty hollow, given how hard every MSFT product is pushing AI. What would it look like if they weren't "slow-playing" it?
That’s fair. I was looking more at the promises of what it can/will do than integrating it into products. The MBA CEOs seem more focused on solving business problems and the tech CEOs are more focused on changing the world.
This is all an aside from the original point, which was that I think it is unfair to pin the proliferation of, and promises made about, AI on some cabal of MBAs somehow forcing it. The people building the tools are just as much at fault, if not more.
Right, I can't sustain for a moment the idea that the guy who fumbled Recall like a stack of wet fish dipped in baby oil is actually a wise sage full of caution. I permit myself one foolish idea a day, and that's not going to be the one for any day of the week.