The Llama 3 release is a big deal for the open-source community. Its 8B model can run on lots of people's computers, and its 70B model is within reach of the part of the AI community that cares more about AI applications.
And finally, the 400B model (still training) could, I assume, still be run by some fairly well-off people once Apple releases the next Mac Studio lineup (rumoured to max out at around half a terabyte of unified memory usable as VRAM).
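As a rough sanity check on that claim, a model's weight footprint is approximately parameter count times bytes per parameter; this quick sketch ignores KV cache, activations, and runtime overhead, so real requirements are somewhat higher:

```python
def weight_memory_gb(params_billions: float, bits_per_param: float) -> float:
    """Rough weight-only memory estimate in decimal GB.

    Ignores KV cache, activations, and framework overhead.
    """
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

for bits in (16, 8, 4):
    print(f"400B @ {bits}-bit: ~{weight_memory_gb(400, bits):.0f} GB")
# At 16-bit the weights alone need ~800 GB; at 8-bit ~400 GB,
# which would just about fit in a half-terabyte of unified memory.
```

So the half-terabyte scenario only works with quantization (8-bit or below), which is consistent with how people already run large models locally.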
Meta and Zuckerberg are cooking something big with this release. Llama 3 blows everything else, open or closed source, out of the water once you take parameter count into account. The only disappointing thing is its limited context length, but since they have promised versions with longer context in the coming months, let's wait.