Disclaimer: I've been in the mobile gaming industry for the last ~5 years. I'm not a Unity dev; my work on Unity is mostly Business Intelligence (AdTech/Marketing Tech) like client-side ML, analytics, live-ops, segmentation, etc.
I wasn't expecting to see UnityMCP in the article, so I'm kinda surprised.
When MCPs started popping up, MCP for Unity was the first thing I searched for. I used it, and when it became unmaintained (maybe for a month or two?) I forked it and made a few nice updates. Even as someone who used the project, I didn't realize the repository had changed hands!
I know hundreds of game devs, so let me share my perspective.
- Your target audience is not HN people; I'm pretty sure 80% of them don't even know what MCP is.
- I use Cursor, but 99% of game devs are using Rider, and the MCP tooling and integrations around it aren't mature enough to gain their attention yet.
- Game devs and their leads are (mostly) dinosaurs who are skeptical about using AI at work. When I was at Rovio, for example, you weren't allowed to use tools like Claude, Cursor, OpenAI, etc.
- I'm not surprised the acquisition of an OSS project didn't get media coverage.
Would I use it as someone who only knows the basics of Unity Editor?
- Maybe.
Do I think someone who works as a game dev and knows Unity really well would use it?
- Probably not. (Note: I'm at the office right now; I asked a few game devs and they were like, naaah)
- - - - -
Your product looks cool, and having something that works directly inside Unity is really neat, but I don't think it would be the moat.
I think there is so much potential if you can build a really good agent, think Manus for Unity, that works directly in Unity, with the MCP part optional as a nice-to-have; the Unity Editor part is the easiest part for them.
Ask yourself this: would it be useful for a game developer working at a company like Scopely, Zynga, Dream Games, etc.?
1. If you're willing to share your Unity MCP project upgrades, I'd love to check them out and perhaps merge them into our project. I couldn't find them on your GitHub.
2. By "Manus for Unity", do you mean an AI tool that does tasks for you in the background, like Codex or Cursor Background Tasks?
3. We've got a couple of companies similar to those you mentioned using the Coplay product, but they mainly use the Record & Replay feature for liveops pipelines. https://youtu.be/Ia6o4ylI41I
Game dev / producer here. If they haven't tried it, they can't really comment on the potential value, can they? That's like devs in the early days of AI saying, "It sucks, it'll never be good."
One easy case -- it's great at debugging issues that involve data in Unity scenes. It can quickly look at a snapshot of editor state / component configuration / serialized data and tell you, "oh, this gameObject's Vector3 pos value is wrong".
Not revolutionary on its own, but a great feature, and there are more.
"Overall, while approaches such as FNet, Performer, and sparse transformers demonstrate that either fixed or approximate token mixing can reduce computational overhead, our adaptive spectral filtering strategy uniquely merges the efficiency of the FFT with a learnable, input-dependent spectral filter. This provides a compelling combination of scalability and adaptability, which is crucial for complex
sequence modeling tasks."
Except that the paper is written as if they discovered that you can use an FFT for attention. They even have a "proof". It's in the title. Then you discover everyone already knew this and all they do is add some extra learnable parameters.
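For anyone curious what's actually being argued about, here's a minimal numpy sketch (mine, not the paper's code) of the difference: FNet-style mixing is a fixed FFT over the tokens, and an "adaptive spectral filtering" variant adds a learnable reweighting of the frequency bins on top. The filter shape and parameterization here are guesses for illustration, not the paper's exact formulation.

    import numpy as np

    def fnet_mix(x):
        # x: (seq_len, d_model). FNet-style fixed mixing: 2D FFT, keep the real part.
        return np.fft.fft2(x).real

    def filtered_mix(x, w):
        # w: (seq_len,) learnable spectral weights -- hypothetical parameterization.
        X = np.fft.fft(x, axis=0)            # mix along the sequence dimension
        X = X * w[:, None]                   # reweight each frequency bin
        return np.fft.ifft(X, axis=0).real   # back to the token domain

    x = np.random.randn(128, 64)
    w = np.ones(128)                          # an all-ones filter makes this a round trip
    print(np.allclose(filtered_mix(x, w), x)) # True: fft -> identity filter -> ifft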
Search engines don't always turn up prior art the way you'd like. Simple jargon discrepancies can cause a lot of mischief. Though I'm sure a case could be made about it being confirmation bias. It's hard to get people to search in earnest for bad news. If it's not in your face they declare absence of evidence as evidence of absence.
That seems like an odd comparison, specialty hardware is often better, right?
Hey, do DSPs have special hardware to help with FFTs? (I’m actually asking, this isn’t a rhetorical question, I haven’t used one of the things but it seems like it could vaguely be helpful).
Xilinx has a very highly optimized core for the FFT. You are restricted to power-of-2 sizes, which usually isn't a problem because it's fairly common to zero-pad an FFT anyway to avoid highly aliased (i.e. hard-edged) binning.
The downside of implementing it directly in hardware is that the size would be fixed.
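As a concrete example of the zero-padding being described, here's a tiny numpy sketch (the helper name is mine): pad the transform length up to the next power of two so a power-of-2-only core can handle it.

    import numpy as np

    def fft_padded(x):
        n = len(x)
        n_pad = 1 << (n - 1).bit_length()   # next power of two >= n
        return np.fft.fft(x, n=n_pad)       # numpy zero-pads x to length n_pad

    x = np.random.randn(1000)
    print(len(fft_padded(x)))               # 1024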
> with data loading from either specially designed vector registers (V-mode) or RAM off-the-core (R-mode). The evaluation shows the proposed FFT acceleration scheme achieves a performance gain of 118 times in V-mode and 6.5 times in R-mode respectively, with only 16% power consumption required as compared to the vanilla NutShell RISC-V microprocessor
> The TPU is so inefficient at FTs that the researchers did not use the FFT algorithm on sequences < 4096 elements, instead opting for a quadratic-scaling FT implementation using a pre-computed DFT matrix.
> on an Nvidia Quadro P6000 GPU, the FT was responsible for up to 30% of the inference time on the FNet architecture [0]
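To make the quoted trade-off concrete, here's a small numpy illustration (mine, not the paper's code) of the two routes: the DFT as one matmul against a precomputed DFT matrix, which is O(n^2) but maps straight onto matrix hardware, versus the O(n log n) FFT. Both give the same answer.

    import numpy as np

    n = 512
    dft_matrix = np.fft.fft(np.eye(n))   # precompute once; row i is the DFT of the i-th unit vector

    x = np.random.randn(n)
    via_matmul = dft_matrix @ x              # quadratic cost, but a single matmul
    via_fft = np.fft.fft(x)                  # O(n log n)
    print(np.allclose(via_matmul, via_fft))  # True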
This company [0] claimed in 2021 that they could cut inference time by 40% if Google used their light-based chips in the TPU. Perhaps more if FFTNet does more of the heavy lifting.
I have been entertaining myself a bit lately by thinking about the ways in which some improvements to a design are very, very interesting to people when it takes 1.2 machines to do a task, not worth paying attention to when it's 6 machines to do the task, and suddenly very interesting again when it's 120 machines to do the task. There's that weird saddle point in the middle where I cannot get anyone else interested in my 20% resource improvements. It's just crickets.
I would guess that the FFT scales better as you increase the number of tokens in the context window. Interesting that Google's models outperform their competitors on context size.
I'm glad someone else had the same thought. I have been wondering what their "secret sauce" is for a while given how their model doesn't degrade for long-context nearly as much as other LLMs that are otherwise competitive. It could also just be that they used longer-context training data than anyone else though.
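A quick back-of-envelope on that scaling guess (my arithmetic, not anything from the paper): pairwise attention mixing grows as n^2 in sequence length while FFT mixing grows as n log n, so the gap widens fast with context size.

    from math import log2

    # Rough multiply counts per mixing layer, ignoring constants and the model dimension.
    for n in (4_096, 32_768, 262_144):
        attn = n * n
        fft = n * log2(n)
        print(f"n={n:>7}  n^2={attn:.2e}  n*log2(n)={fft:.2e}  ratio~{attn / fft:,.0f}x")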
Yeah, but a comparison of power utilization is needed too. You can build hardware that is better than a GPU at something, e.g. making MatMul really efficient and fast. However, actual FFT hardware would win decisively on power and speed at large enough n, simply because the number of multiplications MatMul does is O(n^3) as opposed to the O(n log n) multiplies that the FFT does (complex versus real multiplies notwithstanding).
The FFT is only O(N log N) for a vector of length N.
With respect to matrices, for an N-by-N matrix it would be O(N^2 log N), since you would perform an FFT for each row or column.
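Rough numbers for that comparison (again just arithmetic, not a benchmark): N^3 multiplies for an N-by-N matmul versus roughly N^2 log2 N for row-wise FFTs over the same matrix.

    from math import log2

    for n in (256, 1_024, 4_096):
        print(f"N={n:>5}  matmul N^3={n**3:.2e}  row-wise FFT N^2*log2(N)={n * n * log2(n):.2e}")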
I still think we are comparing ASIC matmul hardware to non-ASIC FFT hardware. The given TPU hardware is doing 256x256 matrix multiplication in linear time by using a 256x256 multiplier grid. An FFT ASIC could do the same kind of thing, but be able to handle a much higher N before memory becomes the bottleneck.
Part of the FFT can be accelerated on GPU hardware, which is full of butterfly-like instructions within warps. Using overlap-and-add/overlap-and-save and cuFFTDx can also make use of heavy parallelism within shared memory. I had a hard time reproducing the tcFFT paper (for lack of time and tensor core skills, I guess), but apparently you can also keep your data in Tensor Core registers.
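Since overlap-and-add came up, here's a plain-numpy sketch of the idea (CPU-side only, not cuFFTDx; the block size and helper name are arbitrary): chop the signal into blocks, FFT-convolve each block independently -- those per-block transforms are the part that parallelises -- and sum the overlapping tails back together.

    import numpy as np

    def overlap_add_convolve(x, h, block=256):
        m = len(h)
        n_fft = 1 << (block + m - 2).bit_length()   # FFT size >= block + m - 1
        H = np.fft.rfft(h, n_fft)
        y = np.zeros(len(x) + m - 1)
        for start in range(0, len(x), block):
            seg = x[start:start + block]
            y_seg = np.fft.irfft(np.fft.rfft(seg, n_fft) * H, n_fft)
            y[start:start + len(seg) + m - 1] += y_seg[:len(seg) + m - 1]
        return y

    x, h = np.random.randn(10_000), np.random.randn(101)
    print(np.allclose(overlap_add_convolve(x, h), np.convolve(x, h)))  # True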
The downside of a dedicated ASIC, besides the fixed size of the multipliers (which isn't that big of a deal because matrix multiplication can be broken down into blocks anyway), is that the precision (16-bit, 8-bit) and data format (floating point vs. integer/fixed) are immutable.
I have a one-liner zsh function that does the same thing. I've been using it for years, it's too useful for me to switch away from, and it probably has better search capabilities, thanks to fzf.
function hist() {
  # List history (fc -l 1 in zsh, history otherwise), pick an entry with fzf,
  # strip the leading event number, double up backslashes, and push the result
  # onto the command line with print -z.
  print -z $( ([ -n "$ZSH_NAME" ] && fc -l 1 || history) | fzf | sed -E 's/ *[0-9]*\*? *//' | sed -E 's/\\/\\\\/g')
}
"I have a one-liner zsh function that does the same thing"
I think you mean "I have a one-liner zsh function that does a tiny subset". It's lacking:
- The ability to filter by current directory
- The ability to filter by current shell session, instead of across all shell sessions (that's assuming you use zsh's shared_history; if not, then the opposite is true: you can only search within the current shell session. See https://zsh.sourceforge.io/Doc/Release/Options.html#index-SH...)
- Search history across hosts
Until recently, I used zsh+fzf, with the default ctrl+r binding replacement provided by fzf. It's been great, but it has lacked functionality that I've wanted for a while now. Atuin fills in these gaps for me.
Now use your function to give you a list of all find commands that took over 10 seconds to run.
Atuin stores _everything_ it can about each command run; what you see when you press C-r is only a tiny subset, and even that gives you the duration and success/failure information immediately.
If you want to, try pressing C-r, select a command from history and press C-o. Normal shell history doesn't store any of that.
I looked it up; the author has a PhD in Brain and Cognitive Science.
Are you aware that even partially implementing something like this would require a multi-year effort with dozens of engineers at minimum and would cost millions of dollars just for training?
"even partially implementing something like this, would require multi-year effort with dozens of engineers at minimum and will cost millions of dollars just for training"
Unfortunately, that means that's also the bar for being able to utter the words "IMPLEMENTING THIS IN A MACHINE WILL ENABLE ARTIFICIAL GENERAL INTELLIGENCE" and being taken seriously. In fact, the bar is even higher than that, since merely meeting the criteria you lay out is still no guarantee of success; it is merely an absolute minimum.
The fact that that is a high bar means simply that; it's a high bar. It's not a bar we lower just because it's really hard.
I'm the best musician on earth, but I can't play any instruments. I can imagine a really amazing song; you'll just never hear it, because it would take thousands of hours of practicing for me to actually learn to play so that I could prove it. So you'll just have to make do with my words and believe me when I say I'm the best musician alive.
Here's an article I wrote describing the song but without actually writing any of the notes because I can't read or write music either.
But I've listened to a lot of music and my tunes are better than those.
I went to school for music description so I know what I'm talking about.
Well, you could hire a team of developers and neuroscientists to build a prototype of the idea and do physical research; whether you yourself have the chops to do it is irrelevant at that point.