[Paper] Power-of-Two Quantization-Aware Training (PoT-QAT) in Large Language Models (LLMs)
In Large Language Models (LLMs), the number of parameters has grown exponentially in the past few years, e.g., from 1.5 billion parameters in GPT-2 to 175 billion parameters in GPT-3.
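As background for the technique in the title: power-of-two quantization constrains each weight's magnitude to a power of two, so weight multiplications can be replaced by cheap bit shifts. The following is a minimal sketch in PyTorch, assuming a simple sign-magnitude rounding rule and a straight-through estimator (STE) for the QAT backward pass; the names `pot_quantize` and `PoTQuantSTE` are illustrative and the paper's actual quantizer (bit-width, clipping, scale handling) may differ.

```python
import torch

def pot_quantize(w: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Round each weight's magnitude to the nearest power of two.

    Hypothetical sketch: sign(w) * 2^round(log2(|w|)). A real
    quantizer would also bound the exponent range for a fixed
    bit-width and handle zeros explicitly.
    """
    sign = torch.sign(w)
    exponent = torch.round(torch.log2(w.abs().clamp_min(eps)))
    return sign * torch.pow(torch.tensor(2.0), exponent)

class PoTQuantSTE(torch.autograd.Function):
    """Straight-through estimator: quantize in the forward pass,
    pass the gradient through unchanged in the backward pass, so
    the full-precision weights remain trainable during QAT."""

    @staticmethod
    def forward(ctx, w: torch.Tensor) -> torch.Tensor:
        return pot_quantize(w)

    @staticmethod
    def backward(ctx, grad_output: torch.Tensor) -> torch.Tensor:
        return grad_output

# Usage: quantize weights on the fly inside a layer's forward pass.
w = torch.randn(4, 4, requires_grad=True)
w_q = PoTQuantSTE.apply(w)          # power-of-two weights used in compute
w_q.sum().backward()                 # gradients flow to the latent weights
```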