I was pleasantly surprised by how well Copilot picked it up. Civet doesn't have that many truly new language features; most of them come from existing languages and are used in a similar way. Copilot is really good at matching the code around the completion point, so I was impressed with how well it did with a new language.
The built-in browser debugger is incredibly good. As long as the transpilation is simple and matches JS semantics, you can still use the debugger. I haven't seen good debugging tools for languages more distant from JS, but I'd love to know if they've become viable.
You've been able to debug TypeScript (and anything else that transpiles to JS) natively in the browser for years using source maps. That includes Dart, C#/F#, Go (as far as I know), and Python.
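For TypeScript specifically, the setup is a single compiler flag. A minimal sketch (file names are placeholders):

    // tsconfig.json -- tsconfig allows comments, so this is valid as-is
    {
      "compilerOptions": {
        "sourceMap": true  // emit app.js.map alongside app.js
      }
    }

The emitted app.js then ends with a //# sourceMappingURL=app.js.map comment, which is what DevTools follows to display and breakpoint the original .ts source.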
For the languages that target wasm instead, there are different debugging stories. Kotlin's is very good; Rust's is pretty immature.
Source maps are not sufficient to fully expose the semantics of the language being debugged to the debugger. You also need the expression evaluator (for things like Watch expressions) to understand what it's debugging. And in cases where transpilation includes nontrivial mapping of data structures, you also need the ability to do that mapping in reverse to display the values.
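A contrived sketch of that last problem, written in TypeScript to stand in for the compiled output (the Point record and its array encoding are invented, not any real transpiler's scheme):

    // Imagine a source language with a record type Point(x, y) that a
    // hypothetical transpiler encodes as a plain JS array for speed:
    type Point = [number, number];   // compiled representation
    const p: Point = [3, 4];         // source code said: Point(x: 3, y: 4)

    // A source map can pause execution on the right source line, but a
    // Watch on "p.x" fails (there is no .x property at runtime), and the
    // Scope panel shows [3, 4]. Rendering it as Point { x: 3, y: 4 }
    // requires the debugger to apply the transpiler's mapping in reverse.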
Source maps work great for TS because it is just "JavaScript with types" at this point.
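Concretely, for most code the TS-to-JS emit is pure type erasure, so source and output line up almost token for token (illustrative snippet):

    // input.ts
    const greet = (name: string): string => `hi ${name}`;

    // output.js from tsc -- same shape, types stripped:
    // const greet = (name) => `hi ${name}`;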
Something interesting I've found while designing Civet is that TypeScript actually mitigates a lot of the downsides of CoffeeScript.
Types help quite a bit with implicit returns, so you don't accidentally return an array of iteration results from a void function.
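For example, here's roughly what a loop-as-last-expression turns into after transpiling, and what the type checker catches (the logAll function is made up):

    // An implicit-return loop effectively compiles to something like this:
    function logAll(items: string[]): void {
      // The loop is the last expression, so its collected results
      // become the return value...
      return items.map((item) => console.log(item));
      // ...and tsc rejects it:
      // error TS2322: Type 'void[]' is not assignable to type 'void'.
    }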
They also help reduce the downsides of terse syntax: just hover over things in the IDE and see what they are. Missed a step in a pipeline? The IDE will warn you if there's a mismatch.
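In TypeScript terms, with nested calls standing in for Civet's |> pipeline (the parse/average functions are invented):

    // A three-step pipeline: string -> number[] -> number.
    declare function parse(raw: string): number[];
    declare function average(nums: number[]): number;

    const ok = average(parse("1,2,3"));  // steps line up, checks fine

    // Skip the parse step and the checker flags it immediately:
    // const oops = average("1,2,3");
    // error TS2345: Argument of type 'string' is not assignable to
    //               parameter of type 'number[]'.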
Not OP, but I could see some shops pushing AI-generated code to production, then finding that when changes need to be made, they can't get the AI to modify the existing code in just the way they need, so a human has to intervene.
Sometimes I can't even get Copilot to generate Python that adds numbers together correctly. Getting an LLM to generate correct, working code for a language that hardly anybody writes anymore is almost assuredly going to lead to failure.
The slope of the improvement curve doesn't really matter, because the target is "better than a human, and able to identify and fix its own errors". The slope will decrease as you approach that threshold.
It's also wildly bad to plan to train and fine-tune on code that you know has bugs. We already have Copilot generating code with trivial vulnerabilities because that's what it was trained on.