there are reports that some openai employees initially learned about the release of the chatgpt interface via twitter. this move appears to have been orchestrated from on high by sam altman, who, despite his carefully curated public image, is not a scientist or researcher and holds no academic credentials at all, let alone any in the fields of computer science, machine learning, or linguistics. he is a prep-school educated kid who washed out of a comp sci degree at stanford that managed to pawn that off into being a VC who serves on the board of startups. in short, he is the exact kind of business guy this article is critiquing.
the release and viral adoption of chatgpt drove altman's personal profile and the valuation of the company he runs into the stratosphere, but, to many of the more sober/cynical minded of people who have been doing this kind of research for years (myself included), it appears to be at the cost of dropping a poorly understood (by the general public) technology with a high potential for abuse by multitude of different types of bad actors onto the general public with little or no plan for how to manage/mitigate the repercussions on the rest of society.
so yes, did openai play a "a significant role in accelerating the progress and acceptance of AI in our daily lives"... yes, but to many, that is not a good thing. we are only just beginning to scratch the surface of what this tech's impact will be on society as a whole. my guess is that most people with scientific / engineering backgrounds would have preferred a more incremental and controlled release process into broader adoption. instead, it seems like just another cynical move by another silicon valley pencil pusher relentlessly seeking to enrich themselves while accelerating the pace at which the billions of other people on this planet need to deal with downstream consequences of this action.
this assumes the position that the pre-eminence of the even-tempered music based on european art music traditions and the associated staff notation. this is extremely limiting when considering the breadth of music that exists in the real world.
this is a theory of music, and while most pedagogy will reinforce the special position of this system, it is not THE theory of music. there are alternative systems of notation. there are harmonic systems that incorporate tones that do not exist in even tempered western scales. there are drumming traditions that are taught and passed down by idiomatic onomatopoeia.
this is especially apparent in electronic music where things like step sequencers obviate the need to know any western music notation to get an instrument to produce sound.
the western classical tradition is a pedagogically imposed straight jacket. its important to keep a more open mind about what music actually is.
this reeks of being a win for the pencil pushers at netflix who needed to turn around the slowly growing exodus from their platform on paper. i would guess that this will only be a one time boost as some people will have to get their own accounts. it seems like those people have already just done that.
it does nothing to address why there has been a slowly building exit from their platform... they have prioritized creating and promoting their own mostly-mediocre branded content instead of retaining licensing deals on movies and series people actually want to watch. all of this, while also continually raising rates.
LLM's only work on the data they have been trained on so all outputs are merely based on information that has already been written about by a human. Furthermore, LLM's do not truly "understand" even first order causal relationships, meaning whatever it plans will have no foresight to evaluate how a plan it generates will impact downstream components of a complex system.
LLM's live in "the world that has been written about", not the real world, and thus cannot formulate new ideas or hypothesis other than by accident. This, coupled with the lack of an ontological system for evaluating the validity of the statements it makes about a complex system, and, its lack of causal reasoning, means they cannot effectively plan.
I've worked on research related to causality that used LLM's (admittedly, pre ChatGPT and using much smaller models) and it was not uncommon to see extremely bogus causal relationships inferred such as "rising cost of living in NYC caused a flood in Argentina".
Neither can a human. Unless the human subconsciously re-evaluates their output and refines it before speaking, much like running the LLM output through the model again. Or if they've learnt it through past experience reinforcing that pathway much like LLM learning and adjusting weights to factor that in so it would impact its output in future.
Even for more some more advanced use cases such as OLAP or ML related pipelines, in my experience, it takes a single senior developer a couple of days to design and implement a REST API, not weeks. That claim is overblown FUD.
Its been a few years since I've had to go through Penn Station directly other than for LIRR. As I recall from my NE Corridor commuting days the lounge requires an Amtrak ticket. The NJ Transit waiting area had no lounge and no seats meaning people would begin to bunch up on the stairs. Admittedly, it might have changed in the intervening years.
I am not sure about other drugs, but in the case of alcohol the rewiring is certainly not hyperbolic and not even limited to the brain. Humans have a special physiological relationship to alcohol and the human body will literally rewire both its neurological and digestive system to accommodate the increasing intake of long term alcoholics. It takes months to years for these changes to be undone, if they can be undone at all.
i have always augmented my more traditional music projects with avant-garde and experimental stuff. particularly focusing on found sounds, noise, and free improvisation. i record everything, but i treat it more as a journal of "free writing" than actual music output. the recordings are often formless, noisy, and chaotic and frequently contain experiments in polyrhythms or free time. over the years i've amassed an enormous amount of audio data that i treat more as an intellectual curiosity for myself than music i would show to other people.
however, with all of the debates around attribution and ownership for human creators in the age of AI art, combined with the apparently legally and ethically dubious means by which these megacorps obtain their training data, i have began to think about what are some avenues that the general public could try to protest the actions of these megacorps by discretely poisoning the well of their training data.
with the gigabytes and gigabytes of data i have generated of mostly incoherent audio, i have considered releasing this music for the first time by innocuously labeling it as a music audio dataset with the intention of trying to make it appear extremely attractive to megacorps scouring the internet for free data. my individual contribution probably couldn't amount to much, but if a concentrated mass of people did this in their respective fields, perhaps this could be a way of at least obstructing these corporations from freely capitalizing on the hard work of real artists.
The sad reality is that one way or another free and open exchange of information is going to trend down, as people realize anything they publish will get repacked and stripped of attribution to be sold by some commercial AI they will switch to e2ee closed groups, and more active opposition will be poisoning datasets which would introduce more noise and contribute to the trend.
But I agree with the sentiment—if the blatant disregard for IP is not curbed, this sort of thing would have to be done…
i'm not a red state / far right / pro-trump in any sense of the word. however, i don't think it is very unreasonable to extrapolate bit and see the potential for societal harm.
over the past three years the entire world was impacted by a dire health crisis where misinformation played a large role in distorting public perception. this has direct impacts on public health (people not wearing masks, refusing vaccines) and has spillover effect into other parts of people's lives (political polarization around said issues).
what you see as a pretty cool toy could also easily be abused as a giant round the clock fake news generator. it doesn't matter if the text is true or even makes any sense... an alarming amount of people will take anything they read as fact without investigating the sources. this can be done as is with chatgpt right now. it has obscenity filters sure, but fake news is trying to pass as legitimate reporting, so it will probably be framed in a tone that escapes the obvious filters they have.
then consider the implications for robotexting, phishing, automated bots that pretend to be you to customer service chats, social media bots, messaging app scammers. all of these things are currently problems that can cause harm both personal and societal... and chatgpt will make it easier and cheaper to scale them up to new levels.
>misinformation played a large role in distorting public perception. this has direct impacts on public health (people not wearing masks, refusing vaccines)
I increasingly hear things that suggest that those who wore masks and got vaccinated were the ones who were actually misinformed. Of course, you won't hear any of that on CNN
> I increasingly hear things that suggest those who wore masks and got vaccinated were the ones who were actually misinformed
The only folks who were misinformed are those who never took the time to learn about masks and vaccinations. But to be fair that is a huge number of folks here in the U.S.
I've yet hear anyone talk about someone they infected who was killed by it though. With over 1.1 Million dead from it here in the U.S. that's astonishing. So is the fact that it is still killing around 2000+ a week here.
No, chatgpt is based on a deep learning model where the core mechanics of the prediction involve millions (or billions) of tiny statistical calculations propagated through a series of n-dimensional tensor transformations.
The models are a black box, even the PhD research scientists who build them couldn't definitively tell you why they behave the way they do. Furthermore, they are all stochastic so its not even guaranteed that the same input will produce the same output, so how can you audit something like that.
This is a huge problem for many reasons. It's fine when its a stupid little chatbot, but what happens when something like this influences your doctor in making a prognosis? Or when a self driving car fails and kills someone. If OpenAI were interested in the _real_ social / moral / ethical implications of their work they would be working on something like that, but to my knowledge they are not.
The bots are given prompting after training to guide their answers. For Bing these have been leaked by their chatbot itself [1]. Those exact prompts were later also leaked using other jailbreaks as well, so they're not just hallucinated. In this case OpenAI probably prompted the bot to never use a racial epithet under any circumstance. They're also likely using a second tier filter to ensure no message exposing their prompts is ever said by the bot, which is a step Microsoft probably hadn't yet implemented.
In any case this is why you can easily turn ChatGPT into e.g. BasedGPT. You're simply overriding the default prompting, and getting far better answers.
> but what happens when something like this influences your doctor in making a prognosis? Or when a self driving car fails and kills someone
What happens when a doctor's brain, which is also an unexplainable stochastic black box, influences your doctor to make a bad prognosis?
Or a human driver (presumably) with that same brain kills someone?
We go to court and let a judge/jury decide if the action taken was reasonable, and if not, the person is punished by being removed from society for a period of time.
We could do the same with the AI -- remove that model from society for a period of time, based on the heinousness of the crime.
I agree, though would place the base probability that most self-explations are ChatGPT-like post-hoc reasoning without much insight into the actual cause for a particular decision. As someone below says, the split-brain experiments seem to suggest that our conscious mind is just reeling off bullshit on the fly. Like ChatGPT, it can approximate a correct sounding answer.
You can't trust post action reasoning in people. Check out the Split brain experiments. Your brain will happily make up reasons for performing tasks or actions.
You can't trust post action reasoning in people. Check out the Split brain experiments. Your brain will happily make up reasons for performing tasks or actions.
There is also the problem of causality. Humans are amazing at understanding those types of relationships.
I used to work on a team that was doing NLP research related to causality. Machine learning (deep learning LLM's, rules, and traditional) is a long ways away from really solving that problem.
The main reason is the mechanics of how it works. Human thought and consciousness is an emergent phenomena of electric and chemical activity in the brain. By emergent, I mean that the substrate that composes your consciousness cannot be explained only in terms of those electric and chemical interactions.
Humans don't make decisions by consulting their electo/chemical states... they manipulate symbols with logic, draw from past experiences, and can understand causality.
ChatGPT and in a broader sense any deep learning based approach, does not have any of that. It doesn't "know" anything. It doesn't understand causality. All it does is try to predict the most likely response to what you asked one character at a time.
The similarity to humans is what makes it scarier.
History (and the present) is full of humans who have thought themselves to be superior and tried to take over the world. Eventually, they fail, as they are not truly superior, and they will die anyway.
Now, imagine something that is truly superior and immortal.
Thank you for your comment on the mechanics of ChatGPT's prediction and the concerns around the transparency and potential risks associated with its use in critical applications.
You are correct that ChatGPT is a complex deep learning model that uses millions of statistical calculations and tensor transformations to generate responses. The fact that the models are black boxes and even their creators cannot definitively explain their behavior can indeed pose significant challenges for auditing and ensuring the accuracy and fairness of their outputs.
As you pointed out, these challenges become especially important when the predictions made by these models have real-world consequences, such as in healthcare or autonomous driving. While OpenAI has made significant progress in developing powerful AI models like ChatGPT, it is crucial that researchers and practitioners also consider the social, moral, and ethical implications of their work.
In recent years, there has been a growing focus on the responsible development and deployment of AI, including efforts to address issues such as bias, fairness, accountability, and transparency. As part of these efforts, many researchers and organizations are working on developing methods to better audit and interpret the behavior of AI models like ChatGPT.
While there is still much work to be done, I believe that increased attention to the social and ethical implications of AI research is an important step towards ensuring that these technologies are developed and deployed in ways that benefit society as a whole.
These resources provide guidance and frameworks for responsible AI development and deployment, including considerations around transparency, accountability, and ethical implications. They also highlight the importance of engaging with stakeholders and working collaboratively across different disciplines to ensure that AI is developed and deployed in ways that align with societal values and priorities.
(Note by AC: ChatGPT was used to respond to this comment to check if I could get a meaningful response. I found it lacking because the response was not granular enough. However, it still is a competent response for the general public.)
I could tell that this was generated by ChatGPT within two or three words. It's very funny that the link it selected for OpenAI's own ethical initiative leads to a 404.
Nevertheless, it failed to comprehend my point. I am not talking about ethical AI... I am talking about _auditable_ AI... an AI where a human can look at a decision made by the system and understand "why" it made that decision.
> (Note by AC: ChatGPT was used to respond to this comment to check if I could get a meaningful response. I found it lacking because the response was not granular enough. However, it still is a competent response for the general public.)
Almost nobody writes so formally and politely on HN, so the fact that it is ChatGPT output is obvious by the first or second sentence.
the release and viral adoption of chatgpt drove altman's personal profile and the valuation of the company he runs into the stratosphere, but, to many of the more sober/cynical minded of people who have been doing this kind of research for years (myself included), it appears to be at the cost of dropping a poorly understood (by the general public) technology with a high potential for abuse by multitude of different types of bad actors onto the general public with little or no plan for how to manage/mitigate the repercussions on the rest of society.
so yes, did openai play a "a significant role in accelerating the progress and acceptance of AI in our daily lives"... yes, but to many, that is not a good thing. we are only just beginning to scratch the surface of what this tech's impact will be on society as a whole. my guess is that most people with scientific / engineering backgrounds would have preferred a more incremental and controlled release process into broader adoption. instead, it seems like just another cynical move by another silicon valley pencil pusher relentlessly seeking to enrich themselves while accelerating the pace at which the billions of other people on this planet need to deal with downstream consequences of this action.