Summary: We have released GPT-J-6B, a 6B-parameter JAX-based (Mesh) Transformer LM (GitHub). GPT-J-6B performs nearly on par with the 6.7B GPT-3 variant (Curie) on various zero-shot downstream tasks. You can try it out in this Colab notebook or the free web demo. The library also serves as an example of model parallelism with xmap on JAX. Below, we refer to GPT-J-6B as GPT-J for short.

Why does this project matter?

GPT-J