I use Firefox Mobile Nightly on Android and appreciate it for the dark mode extension and ad blocking. There are some issues but the benefits outweigh them for me.
I don't even have a Home button that I can see, I must have turned it off in settings? I describe my tab count using scientific notation, though, so I'd be a "new tab" guy, anyway. But I'd also be a proponent of it being configurable.
Kinda related, I wish there was an easy way to exclude dependencies at pip-install time and mock them at runtime so an import doesn't cause an exception. Basically a way for me to approximate "extras" when the author isn't motivated to do it for me, even though it'd be super brittle.
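The runtime-mocking half is actually doable today with a `sys.modules` shim, as long as it runs before anything imports the real (absent) package. A minimal sketch, with `heavydep` as a purely hypothetical package name:

```python
import sys
from unittest import mock

# Pre-register a stand-in before anything imports the real (absent) package.
# Attribute access on a MagicMock just returns more mocks, so downstream
# "import heavydep" and "heavydep.whatever(...)" won't raise.
sys.modules["heavydep"] = mock.MagicMock(name="heavydep")

import heavydep  # resolved from sys.modules, no ImportError

# Calls succeed and return MagicMocks instead of crashing.
result = heavydep.Model("config").predict([1, 2, 3])
```

Super brittle, as you say: any code that checks return types, does `isinstance`, or inspects the mocked module's contents will break.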
This sounds doable, actually. You'd want to pre-install (say, from a local wheel) a matching dummy dependency where the metadata claims that it's the right version of whatever package (so the installer will just see that the dependency is "already satisfied" and skip it), but the actual implementation code just exposes a hook to your mocking system.
Doesn't work if version resolution decides to upgrade or downgrade your installed package, so you need to make sure the declared version is satisfactory, too.
Yes, but also "An... entity may not provide... therapy... to the public unless the therapy... services are conducted by... a licensed professional".
It's not obvious to me as a non-lawyer whether a chat history could be ruled to be "therapy" in a courtroom. If so, this could count as a violation. There's probably already a lot of law around lawyers and doctors cornered into giving advice at parties that might apply (e.g., maybe a disclaimer is enough to work around the prohibition)?
Functionally, it probably amounts to two restrictions: a chatbot cannot formally diagnose & a chatbot cannot bill insurance companies for services rendered.
Most "therapy" services are not providing a diagnosis. Diagnosis comes from an evaluation before therapy starts, or sometimes not at all. (You can pay to talk to someone without a diagnosis.)
The prohibition is mainly on accepting payment for an advertised therapy service without following the rules of therapy (licensure, AI guidelines).
These things usually (not a lawyer tho) come down to the claims being actively made. For example "engineer" is often (typically?) a protected title but that doesn't mean you'll get in trouble for drafting up your own blueprints. Even for other people, for money. Just that you need to make it abundantly clear that you aren't a licensed engineer.
I imagine "Pay us to talk to our friendly chat bot about your problems. (This is not licensed therapy. Seek therapy instead if you feel you need it.)" would suffice.
For a long time, Mensa couldn't give people IQ scores from the tests they administered because somehow, legally, they would be acting medically. This didn't change until about 10 years ago.
Defining non-medical things as medicine and requiring approval by particular private institutions in order to do them is simply corruption. I want everybody to get therapy, but there's no difference in outcomes whether you get it from a licensed therapist using some whacked out paradigm that has no real backing, or from a priest. People need someone to talk to who doesn't have unclear motives, or any motives really, other than to help. When you hand money to a therapist, that's nearly what you get. A priest has dedicated his life to this.
The only problem with therapists in that respect is that there's an obvious economic motivation to string a patient along forever. Insurance mitigates that by cutting people off at a certain point, but that's pretty brutal and not motivated by concern for the patient.
If you think human therapists intentionally string patients along forever, wait until you see what tech people can achieve with gamified therapists literally A/B tested to string people along. Oh, and we will then blame the people for "choosing" to engage with that.
Also, the proposition is dubious, because there are waitlists for therapists. Plus, a therapist can actually lose their license, while the chatbot can't, no matter how bad the chatbot gets.
In another comment I wondered whether a general chatbot producing text that was later determined in a courtroom to be "therapy" would be a violation. I can read the bill that way, but IANAL.
That's an interesting question that hasn't been tested yet. I suspect we won't be able to answer it clearly until something bad happens and people go to court (sadly). Also IANAL.
Reading the text of the bill as a non-lawyer, it seems to also ban AI that provides therapy. I don't know if the AI needs to be explicitly labeled as therapy or if the content of chat could be decided to be therapy in a courtroom.
I think it is significant that this rambling channel supplements the yearly in-person meeting. Presumably, that's where one tends to form deeper social connections and get a feel for what different people find interesting to talk about? That is, if the team is varied enough so that there is little overlap in hobby interests or daily life.
On my Android gmail app, when I reply to an email, there's very little on the screen at the start of the process. The pink-ish send button really stands out since everything else is grey text (I'm using dark mode). They show an image after the user has composed their message and also expanded the quoted previous email text, which is not really what the user's experience is like, so it's misleading IMO.