I do not like the word “my” anywhere in Human-Computer Interaction (HCI). Putting on my autistic, very factual, and methodologically empathetic hat, I prefer a clear line of separation—machines should act as machines, not as personalized companions. I prefer “your” everywhere.
I wanted to do research in HCI a while back, but funding in this area is limited. To me, HCI research felt overly focused on making computer interaction more personable by adding layers of so-called "personalization." Let interaction with machines remain objective, straightforward, and friendly—especially for older people.
This is similar to why I prefer LLMs to behave less human-like and more robotic and machine-like, because they're not humans or human-like; they are robotic and machine-like. The chatbot is not my friend and it can't be my friend, so it shouldn't behave like it's trying to be my friend. It should answer my queries and requests with machine-like no-nonsense precision and accuracy, not try to make an emotional connection. It's a tool, not a person.
With some effort, you can train yourself to respond to "You are absolutely right" by taking offense at the attempt to manipulate you.
It's good training and has been since long before the AIs came along. For instance, the correct emotional response to a highly attractive man/woman on a billboard pitching some product, regardless of your opinions on the various complicated issues that may arise in such a situation, is to be offended that someone is trying to manipulate you through your basic human impulses. The end goal here isn't even the offendedness itself, but to block out as much as is possible the effects of the manipulation. It may not be completely possible, but then, it doesn't need to be, and I'm not averse to a bit of overcompensation here anyhow.
Whether LLMs actually took this up a notch I'd have to think about, but they certainly blindsided a lot of people who had not yet developed defenses against a highly conversational, highly personalized boot licking. Up to this point, the mass media blasted out all sorts of boot licking and chain-yanking and instinct manipulation of every kind they could think of, but the personalization was mostly limited to maybe printing your name on the flyer in your mailbox, and our brains could tell it wasn't actually a conversation we were in. LLMs can tell you exactly how wonderful you personally are.
Best get these defenses in place now. We're single-digit years at best away from LLMs personalizing all kinds of ads to this degree.
Back in the early 2000s, there were gaming magazines — notably Incite and PC Accelerator — that tried to inject "babes" and other lad mag content into a publication ostensibly about video games. I sniffed this out for the pandering it was. Not only was it needless noise, but it detracted from the video game content. In the 2000s, gaming was largely done by kids and young adults with not much money, who needed guidance on which games to buy since they couldn't afford to get very many. So some semblance of detailed evaluation and a critical eye were necessary, even if gaming mags were nowhere near objective even way back when. Making your entire magazine look like an energy drink ad, with tits splashed on every other page, meant you weren't even pretending to take your ostensible subject matter seriously.
My favorite reply is something like: "You're The Real GOAT!!! And now let's just quickly clarify some minor points", followed by a complete destruction of my arguments :).
For the sake of argument -- if you were talking about your real desk to someone, you would say "my desk", no? If you were talking about a document somewhere in your files, you would say "it's in my files". If you were forced to physically label a drawer of your personal documents either "my documents" or "your documents", I think it's safe to say "my" is the more intuitive choice there.
To me, "your" violates the human-machine boundary more than "my" in many circumstances because it implies the machine is its own autonomous being that has its own "my". No, the computer isn't giving me anything; I own the computer, and I own the files, there is no external exchange here.
(all that isn't to say there aren't plenty of cases where "your" makes more sense -- more than where "my" makes sense, by my reckoning, considering how often there is an external exchange of some sort going on. But "your" isn't a one-size-fits-all solution)
The example is bloated UI to begin with. It should just be a checkbox with the label: "Share your profile photo".
This is going on a tangent now, but making things more clear and concise allows more options to fit on one screen which also reduces the need for endless submenus. This is a better experience because the user doesn't have to remember where the option is if they're all on one screen anyway, yet still broken up under subheadings.
Edit: As I stand massively downvoted at this point in time despite my comment being entirely factually correct, I invite any potential downvoter to consider the sentence “Give me apple” before reaching for the button.
> The closest analogous sentence would be "Give apple", which works perfectly well as a choice to select in a textual medium.
Definitely no, "Give apple" is baby talk. Completely unacceptable in a choice. That's not proper English. I will die on that hill.
I'm actually shocked by the number of people here who think it's acceptable and fine.
> Those are not analogous. You have added a direct object without preposition, which is not standard usage in such contexts.
The "apple" in "give apple" is a direct object without preposition. It's entirely analogous to what I wrote. Are you confused by the "me" in my sentence? "Me" is an indirect object here.
We basically have the same sentence. It just becomes entirely obvious that omitting the article is erroneous as soon as you add an indirect object. It's equally erroneous without it, but apparently people have somehow convinced themselves it is acceptable after years of misuse in poor computer interfaces.
There is no officially sanctioned authority specifying the English language so "proper English" is not a defined concept in any way or form. You can choose to die on that hill, but you're fighting a war that doesn't even have defined sides.
> Do you actually think this is an unacceptable and grammatically incorrect way of phrasing these provided options?
Yes, I do.
That’s Sierra-like poorly phrased English to save characters in a constrained medium. Completely incorrect in any context, and unacceptable when you don’t have to save bits.
It’s only somewhat understandable because the zero article is used with proper names. Actually, I find it interesting that you felt the need to capitalise.
Well, then you are at odds with the vast majority of English-speakers, and will just have to come to terms with the fact that the language is moving on without you.
Telegraphic style is not grammatically incorrect, it’s a feature of instructional English.
Consider “insert nut into bolt”, “slice onion thinly”, or “sprinkle vinegar over chips”.
I agree that your counter example does not work, but that’s due to the ambiguity introduced by having both an indirect and direct object. In a list of short instructions, “give apple to me” would not be ungrammatical.
Sadly that is factually correct and none of the links in your reply actually supports your point.
The rule about the zero article doesn't list the case of a noun after an imperative.
The first link is about the subject, not the object and the third is about negative imperative. Why are you posting links about completely unrelated things?
Once again, using a noun without an article this way is grammatically incorrect.
"Share profile photo" would be grammatically incorrect as a complete sentence.
But it's perfectly grammatically correct as a command label.
English has different grammar rules in different contexts. For example, newspaper headlines omit articles all the time. That doesn't make the NYT grammatically incorrect on every page, though. Because they're using correct headline grammar, which is different from sentence grammar.
That's commonly called Grandma's rules, sometimes shortened to gram's rules. I've never seen the spelling "grammer" before, even though gram'r is arguably more correct than gram's.
> But it's perfectly grammatically correct as a command label.
Agree to disagree. The reason it sounds robotic is that it's grammatically incorrect. The article is not optional before the object in this sentence.
The 2nd and 3rd examples are plural. You don't need an article for plural nouns. "Fix bayonets." and "Fix the bayonet." are standard grammar. "Fix bayonet." isn't.
Well, "hands up" is lacking a verb, and "fix bayonets" is in a funny passive tense - or something - because it seems to say "generally go around looking for bayonets to fix", but means specifically "fix your bayonets". In fact "hands up" is like that too; the intent is "put your hands up", not just "put hands up" in the abstract.
Then there are informational signs, too. "Wet floor" is not an instruction. Labels generally aren't sentences.
Or instructions on signs: ring bell for assistance, return tray to counter, close gate after use.
> Or instructions on signs: ring bell for assistance, return tray to counter, close gate after use.
I have never seen this.
I have seen plenty of "Please close the gate" or "Keep the gate closed". Sometimes the article is elided when the noun is the subject ("Gate must be kept closed"), but imperative + noun without an article on a sign seems highly unusual to me. It feels weird, so I would definitely notice.
I have seen "ring bell for assistance", however. It's jarring every time. I must be the strange one.
This kind of phrasing is so common (in American English directions) that I remember examples from when I was very young:
(on toothpaste) "Squeeze tube from the bottom and flatten it as you go up."
(on a kerosene heater) "Rotate wick adjuster knob clockwise until it stops."
Australians tend to prefer more conversationally phrased directions from what I've seen, e.g., the rail station signs that read "Keep off the tracks and use the walkways provided to cross. Or catch a $100 fine. Don't say we didn't warn you, mate!"
I'd go for "Share profile photo" for the checkbox. Why even get into ownership of the photo? Maybe it's not mine and was given to me by whoever took the photo? Just keep it simple and stop pretending that my OS is alive.
Overly-anthropomorphised dialog boxes (such as pop-up offers on web sites, not so much on operating system controls) bug me in the same way. Instead of "Yes, please" and "No, thank you" buttons, I would prefer simply "Yes" and "No". I'm giving orders to a machine not talking to a person!
The one I hate is the error message that simply says "Something went wrong.", maybe with a frowning cat icon, but with no other diagnostic message that could be used to determine what exactly went wrong and what corrective action to take.
This annoys me so much, and it's another reason I hate phone apps, because they do this all the time. Usually ANY error resolves to "something went wrong". I'm not expecting a stack trace, but they're too scared to show the user ANY tech jargon at all, and it's another reason why young people are computer illiterate. At least I can access the developer console on modern webshit when using an actual computer.
Incidentally, I had to logcat an app recently that failed with no error at all, to find out it was overzealous DNS blocking that prevented it from talking to its API endpoint. I don't do Android development, but I'm guessing apps are aware of name resolution failures and should be able to tell the user about them, without using fucking logcat.
> error message that simply says "Something went wrong."
Actually, are there HCI guidelines for communicating inexplicable internal errors to the user? I definitely write assertions that really should never ever fail - if they do, we are in a completely unanticipated state. Either there's been a truly massive logic bug, or maybe even a memory error flipped a bit, but in either case, I have no idea what state the program is in or what caused it to get there.
What would a good tech writer tell the user in this situation? I can't think of anything all that much more helpful than "something went wrong". Maybe "There is a serious bug in the program, totally our fault, please help us by reporting it"?
I'm not a user, but to me the problem with the empty "something went wrong" is not that it obscures the error details but that it obscures the failed action. What exactly went wrong? Should I retry my last action? Is my data safe? Is it safe to close the program/app without saving?
If the user is to report a bug, then any additional information would be better than "Something Went Wrong." "Something Went Wrong" is the equivalent of the guy who calls into the IT helpdesk and says "My computer isn't working."
Surely, somewhere in the code, there is an if statement, and you're displaying the "Something Went Wrong" dialog in the else branch. You could at least add some context that the user can copy down, so that the bug report that will come later helps you find the bug.
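A minimal sketch of that idea in Python (all names here are hypothetical, not from any particular app): keep the user-facing wording plain, but name the failed action, say whether data was affected, and attach a short copyable reference ID, while the full traceback goes to the log where developers can find it.

```python
import sys
import traceback
import uuid


def handle_failure(exc: Exception, action: str) -> str:
    """Build a user-facing message for an unanticipated error.

    The dialog text stays non-technical, but it names the failed
    action and carries a reference ID that correlates the user's
    bug report with the detailed log entry.
    """
    # Short reference ID the user can copy into a bug report.
    ref = uuid.uuid4().hex[:8]
    # Full details go to the log, never to the dialog.
    details = "".join(
        traceback.format_exception(type(exc), exc, exc.__traceback__)
    )
    print(f"[{ref}] error during '{action}':\n{details}", file=sys.stderr)
    # The dialog still reads plainly, but answers the user's real
    # questions: what failed, is my data safe, how do I report it.
    return (
        f"Something went wrong while {action}. "
        f"Your data was not changed. Reference: {ref}"
    )


# Simulated "should never happen" failure path.
try:
    raise ValueError("simulated internal bug")
except ValueError as e:
    message = handle_failure(e, "saving your profile photo")
    print(message)
```

Even this much turns "My computer isn't working" into a report that support can actually act on, without exposing a stack trace to the user.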
Steve Summit liked to tell the story of an early Mac application with a distinct UI flourish: for dialog messages indicating success, the label on the button to dismiss the dialog would be changed to "Yay!"; for error messages, it would be changed to "Damn!".
Just another item on the long list of Things Done in the 80s That We Couldn't Get Away With Today.
Back when Microsoft released Windows 98, they completely revamped the Explorer UI (made it shittier imho), among other things making folders open with a single click instead of the previous double click. This, their marketing department said, was to make your local computer look and behave more like the Web, and thus be more familiar. The theme of Windows 98 was an OS built for the Web, with smooth integration between local and Web resources.
I was like NO!!! YOU DO NOT WANT THIS!!! The difference between your local computer and the Web is like the difference between your house and St. Charles Avenue in New Orleans during Carnival parade season. My wife may feel "at home" in both, but she stands a good chance of being pickpocketed in one environment; the other, much less so.
I'm with you. We should emphasize a bright-line distinction between interaction with machines and interaction with people. "My Computer" in Windows 9x is okay to me, especially in light of the above; you WANT people to recognize the difference between "my computer" and "someone else's computer". But messages like "Please wait while we set things up" in recent Windows piss me off. What is this "we" shit, kemosabe? Who are you and what are you doing messing around in my computer?
I would claim "Your" doesn't belong either. :) UI should just passively describe things to the user. The same goes for technical documentation: describe what an option does, don't tell the user what they can or cannot do.
I applied for two separate roles around 2–3 years ago. I’ve always really liked your product, and I enjoyed talking to one of your developers. However, since your team was primarily using Ruby at the time and that’s not my main language, it wasn't the best fit back then.
That said, I’m still very interested and was wondering if there might be any opportunities for a technically hybrid role (Developer Advocate, Customer Success Engineering, Support Engineering) on a part-time basis.
To be upfront: I currently have a full-time position, but I’m looking to contribute during weekends and around 2–3 hours on weekdays, totaling around 20–25 hours per week.
I wish there were a somewhat acceptable, though controversial, way for us to distinguish between good and evil, like how success is defined by disposable wealth. You can argue that society does not see it that way, but there is no absolute way of denying it.
> way for us to distinguish between good and evil like how success is defined by disposable wealth
Above a certain point, disposable wealth turns very readily to evil if it's not accompanied by social responsibility, a point made by some extremely woke dude in the Bible.
I’d say that those claiming it’s a simple or classic strategy have very little idea of how stock exchanges operate in second and third-world countries. Getting permission to trade as a foreign institutional investor requires a significant amount of legal work and, umm, bureaucratic investment. Almost all stock exchanges out there use off-the-shelf trade surveillance software, which means the exchange will flag this, and so will the SEC-equivalents, on every trade they make. There’s also a proactive element to this in the form of writing reports and asking for explanations regarding the trades. There’s no way these trades happen without someone noticing.
The thing is, Jane Street still consists of some of the smartest people in the room. Getting into markets like this and making large-volume trades is no easy feat. We often equate algorithmic prowess with investment intelligence, but in reality, navigating the legal and regulatory requirements is the only edge you have in trading these days. It’s very hard to figure this out as an international firm. Jane Street did it, and they deserve kudos for it. Trust me, if it were an Indian firm making the same moves, you wouldn’t have heard about it.
You’ll see Jane Street will pay a fine and come out on top. This is because they plan for these things with the expectation that regulators will make a scene about it.
That’s such a STEM thing to say. In finance, this is considered a badge of honor.
Your genius physicist hedge fund operator who "broke the game" in the investment world ended up paying $7 billion in a tax settlement [0]. There’s no algorithm—it’s all about regulations and manipulation. In the investment world, fines, "bureaucratic investments," and similar costs are just operating expenses. Guess what? Everything is a line item in a financial report.
You hire STEM researchers from Ivy League schools—why? You hire these guys not just because they’re smart, but because they come with "back home connections"—maybe a multi-millionaire businessman for a father or a politician for an uncle. Nobody makes it to these colleges or financial institutions by merit alone. You need those connections. That’s entirely how finance works.
Thanks for your perspective on this! It is somewhat depressing to view it in that light, but that's the way it works. Being in STEM, and not in finance, I thought reputation and trust mattered in banking and finance, based on articles like this [1].