[D] How to contribute to the development and/or research about TPU?
I’ve recently gotten into studying deep learning accelerators, and the one I’m most interested in is the TPU v3. Given that fast multiplication of ever-larger matrices is crucial for advancing cutting-edge generative models (GPT-2, Sparse Transformer), MAC throughput is becoming a severe bottleneck. To help resolve this, I’d like to contribute to TPU development in whatever ways I can. However, given the scarcity of publicly available documentation on TPU v3 and its ongoing research, I have no idea what the research/development group considers to be the current bottleneck in their project. What can I do? I’m a PhD student in ML.
submitted by /u/HigherTopoi