In a new Foreign Policy piece, Ray Wang and I argue that Chinese tech firms don't want to use Huawei's AI chips. But export controls on Nvidia's chips could force them to finally switch.
On what basis are you asserting that Chinese developers prefer the watered-down Nvidia chips over Huawei's Ascend chips, other than that Huawei might be a competitor? Do you assume Chinese developers are not smart enough to realize Nvidia chips can be cut off at any time at the whim of the regime, or that geofencing backdoors and remote kill switches may already (or potentially) be embedded in these chips? The whole reasoning is one-sided and premised on your being smarter than the other side. Good luck.
Lest people think top-down industrial policy is inferior, I remind people of ping-pong: when the penhold grip favored by the PRC was getting wiped out by forehand (smash) tactics, officials built a parallel team to mimic the grip and devised new tactics and training to counter the high-speed forward spin. It is an open question whether a novel TPU architecture will emerge, Athena-like, from industrial labs ... it's a big ask, from compilers to maths libraries to memory-bandwidth-efficient vector stores to cloud/fog deployment tools.
I’m confused. How is this different from what the US is already doing?
> end up helping China’s domestic chipmakers like Huawei if not done carefully
Have the powers that be decided to jump before being pushed? (xref https://www.tomshardware.com/tech-industry/artificial-intelligence/top-china-silicon-figure-calls-on-country-to-stop-using-nvidia-gpus-for-ai-says-current-ai-development-model-could-become-lethal-if-not-addressed) DeepSeek have shown they can work close to the hardware layer, rebalancing the grunt (SIMD ops) to surface (comms) ratio. So if the current datacenters are built out using the current generation of GPUs/TPUs, would a leapfrog approach ... say, chip-to-chip photonics ... yield advantages in the inference step, which is more application-specific? A rough sketch of the tradeoff is below.
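To make the grunt-to-surface point concrete, here is a minimal back-of-the-envelope sketch in the spirit of a roofline model: how many FLOPs a chip must do per byte exchanged with its neighbors before communication stops being the bottleneck. All the numbers are illustrative assumptions, not published specs for any real accelerator or photonic link.

```python
# Roofline-style balance point: FLOPs per byte of interconnect traffic
# needed for a workload to stay compute-bound rather than comms-bound.
# All figures below are hypothetical, chosen only to show the shape
# of the tradeoff the comment raises.

def balance_point(peak_flops: float, interconnect_bw: float) -> float:
    """Minimum arithmetic intensity (FLOPs/byte moved off-chip)
    at which compute, not communication, limits throughput."""
    return peak_flops / interconnect_bw

PEAK_FLOPS = 1e15  # assumed: 1 PFLOP/s of dense compute per chip

interconnects = {
    "electrical": 400e9,  # assumed: 400 GB/s chip-to-chip, copper
    "photonic":   4e12,   # assumed: 4 TB/s, a hypothetical 10x leap
}

for name, bw in interconnects.items():
    print(f"{name}: need {balance_point(PEAK_FLOPS, bw):.0f} "
          "FLOPs/byte to remain compute-bound")

# electrical: need 2500 FLOPs/byte to remain compute-bound
# photonic: need 250 FLOPs/byte to remain compute-bound
```

Under these assumed numbers, a 10x bandwidth jump cuts the required arithmetic intensity 10x, which matters most for inference, where small batch sizes keep intensity low and comms tends to dominate.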
Why would you want to advise the US how to hobble China’s development?