[D] Will AWS/GCP or any cloud platform ever be cheaper than owning your own GPUs in the long run?
Every way I look at spec'ing out compute infrastructure, owning your own equipment comes out cheaper for training models. How is this possible? Why aren't AWS and Google cost-competitive at scale? Surely they have much cheaper power and hardware costs.
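To make the comparison concrete, here is a minimal break-even sketch: at what utilization does buying a GPU beat renting one on demand? All of the prices and power figures below are hypothetical placeholders I made up for illustration, not quotes from any provider, and the model ignores real-world factors like depreciation, cooling, networking, and staff time.

```python
# Rough break-even sketch: renting a cloud GPU vs. buying one outright.
# Every number below is a hypothetical placeholder, not a real quote.

CLOUD_RATE_PER_HOUR = 2.00      # assumed on-demand price for one GPU ($/hr)
GPU_PURCHASE_PRICE = 15_000.00  # assumed up-front hardware cost ($)
POWER_DRAW_KW = 0.7             # assumed GPU + host power draw (kW)
ELECTRICITY_RATE = 0.12         # assumed electricity price ($/kWh)

def owned_cost(hours: float) -> float:
    """Total cost of owning: purchase price plus electricity used."""
    return GPU_PURCHASE_PRICE + hours * POWER_DRAW_KW * ELECTRICITY_RATE

def cloud_cost(hours: float) -> float:
    """Total cost of renting the same GPU on demand."""
    return hours * CLOUD_RATE_PER_HOUR

def break_even_hours() -> float:
    """Utilization (in GPU-hours) at which owning becomes cheaper."""
    hourly_saving = CLOUD_RATE_PER_HOUR - POWER_DRAW_KW * ELECTRICITY_RATE
    return GPU_PURCHASE_PRICE / hourly_saving

hours = break_even_hours()
print(f"Break-even at ~{hours:,.0f} GPU-hours "
      f"(~{hours / 24 / 365:.1f} years of 24/7 use)")
```

With numbers in this ballpark the crossover lands well under the useful life of the card, which is the usual intuition behind "buy if your utilization is high"; cloud pricing has to cover idle capacity, margin, and managed-service overhead on top of the raw hardware and power.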