
this is like 5 different questions all across the landscape - what exactly do you think answers will do for you?

> How often do hardware optimizations get created for lower level optimization of LLMs and Tensor physics?

LLMs? all the time? "tensor physics" (whatever that is) never

> How reconfigurable are TPUs?

very? as reconfigurable as any other programmable device?

> Are there any standardized feature flags for TPUs yet?

have no idea what a feature flag is in this context nor why they would be standardized (there's only one manufacturer/vendor/supplier of TPUs).

> Is TOPS/Whr a good efficiency metric for TPUs and for LLM model hosting operations?

i don't see why it wouldn't be? you're just asking is (stuff done)/(energy consumed) a good measure of efficiency to which the answer is yes?





> have no idea what a feature flag is in this context nor why they would be standardized (there's only one manufacturer/vendor/supplier of TPUs).

x86, ARM, and RISC-V have all standardized CPU feature flags, which can be reviewed on Linux with /proc/cpuinfo or with dmidecode.

  cat /proc/cpuinfo | grep -E '^processor|^flags|^Features|^BogoMIPS|^CPU'
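
For illustration, a rough Python equivalent (a sketch only, assuming Linux; the field is named "flags" on x86 and "Features" on ARM):

  # Sketch: collect CPU feature flags from /proc/cpuinfo on Linux.
  # The field is named "flags" on x86 and "Features" on ARM.
  def cpu_feature_flags(path="/proc/cpuinfo"):
      flags = set()
      with open(path) as f:
          for line in f:
              key, _, value = line.partition(":")
              if key.strip() in ("flags", "Features"):
                  flags.update(value.split())
      return sorted(flags)

  print(" ".join(cpu_feature_flags()))
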
There are multiple TPU vendors. I listed multiple AI accelerator TPU products in the comment you are replying to.

> How reconfigurable are TPUs?

TIL Google's TPUs are reconfigurable via OCS (Optical Circuit Switches), which can switch the interconnect between, for example, 3D torus and twisted torus configurations.

(FWIW, quantum libraries mostly offer line qubits and lattice qubits. There is also a recent "layer code" paper that aims to surpass surface codes.)

But back to classical TPUs:

I had already started drafting a reply to myself to improve those criteria; paraphrasing from 2.5 Pro:

> Don't rank by TOPS/Whr alone; rank by TOPS/Whr @ [Specific Precision]. Don't rank by Memory Bandwidth alone; rank by Effective Bandwidth @ [Specific Precision].

Hardware ranking criteria for LLM hosting costs:

Criterion 1: EGB (Effective Generative Bandwidth) = Memory Bandwidth (GB/s) / Precision (Bytes)

Criterion 2: GE (Generative Efficiency) = EGB / Total Board Power (Watts)

Criterion 3: TTFT Potential = Raw TOPS @ Prompt Precision

LLM hosting metrics: Tokens Per Second (TPS) for throughput, Time to First Token (TTFT) for latency, and Tokens Per Joule for efficiency.
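
As a toy illustration of those criteria (a sketch only; the hardware and serving numbers below are made-up placeholders, not any vendor's specs):

  # Sketch of the ranking criteria above; example numbers are hypothetical.
  def effective_generative_bandwidth(mem_bw_gb_s, precision_bytes):
      # EGB: memory bandwidth divided by weight precision in bytes
      # (e.g. 2 for FP16/BF16, 1 for INT8).
      return mem_bw_gb_s / precision_bytes

  def generative_efficiency(egb, board_power_w):
      # GE: EGB per watt of total board power.
      return egb / board_power_w

  def tokens_per_joule(tokens_per_second, avg_power_w):
      # Serving-side efficiency: throughput over power draw (1 W = 1 J/s).
      return tokens_per_second / avg_power_w

  # Hypothetical board: 1000 GB/s memory bandwidth, 300 W, INT8 weights.
  egb = effective_generative_bandwidth(1000.0, 1.0)
  ge = generative_efficiency(egb, 300.0)
  print(f"EGB={egb:.0f}, GE={ge:.2f} per W")
  # Hypothetical measured serving rate: 500 tokens/s at 300 W.
  print(f"tokens/J={tokens_per_joule(500.0, 300.0):.2f}")

(EGB here is only a rough proxy for the memory-bound decode ceiling; real throughput also depends on batch size, KV-cache traffic, and interconnect.)
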


> There are multiple TPU vendors

There are not - TPU is literally a Google trademark:

> Tensor Processing Unit (TPU) is an AI accelerator application-specific integrated circuit (ASIC) developed by Google.

https://en.wikipedia.org/wiki/Tensor_Processing_Unit

The rest of what you're talking about is irrelevant.




