I was getting ready to post the exact same thing. The built-in event loop is major. Go modelled a lot on Erlang, but made tradeoffs that keep it closer to a C-style programming model.
It's a great language and the limits of its concurrency model aren't something that will really be apparent to you if you haven't learned Erlang/Elixir. For most people, it provides a concurrency model that's a cut above everything else out there.
The main differences between Go and the BEAM languages (Erlang/Elixir) are cooperative vs preemptive scheduling, shared memory vs immutable message passing, a single shared garbage-collected heap vs a heap per process, and lastly a shared runtime vs per-process isolation.
Cooperative scheduling means the scheduler only gets control back when code relinquishes it (for example at I/O points), while a preemptive scheduler will take control back on its own. Cooperative scheduling can give better end-to-end numbers on a benchmark, but you run the risk of one piece of code monopolizing the processor. Preemptive scheduling gives consistent performance for every operation in the runtime, without letting anything crowd out the rest. That's one of the reasons it's possible to reliably run a database inside the runtime, alongside the rest of your code, on BEAM.
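To make the scheduling difference concrete, here's a rough Go sketch (my own illustration, not anything from the original discussion; the spin function name is made up). It's a CPU-bound loop that only hands control back when it calls runtime.Gosched(). On Go versions before 1.14, which added asynchronous preemption, a tight loop like this could starve other goroutines on the same thread; BEAM instead preempts every process after a fixed budget of work.

    package main

    import (
        "fmt"
        "runtime"
        "time"
    )

    // spin is a CPU-bound loop with no I/O. The runtime.Gosched() call is the
    // cooperative "hand control back" hook; without it (and without async
    // preemption), a loop like this could monopolize its OS thread.
    func spin(yield bool) {
        for i := 0; ; i++ {
            if yield && i%1000000 == 0 {
                runtime.Gosched() // voluntarily yield to the scheduler
            }
        }
    }

    func main() {
        runtime.GOMAXPROCS(1) // force everything onto a single OS thread
        go spin(true)         // cooperative: periodically yields

        // This loop only runs when spin() yields (or when a modern Go
        // runtime preempts it asynchronously).
        for i := 0; i < 3; i++ {
            fmt.Println("tick", i)
            time.Sleep(100 * time.Millisecond)
        }
    }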
Shared memory with pointers is pretty standard and definitely provides some performance perks, especially when dealing with large data structures. The flip side is that native clustering doesn't work. With BEAM languages, which lack shared memory and rely on message passing, you can call a function on a server in another data center just as easily as a function in your local heap. That makes it possible to distribute everything smoothly without worrying about updating shared memory on a specific machine or having state get out of sync. Go's channels help you avoid this, but by including shared memory in the language at all you trade away natural clustering and distribution.
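Here's a small sketch of the two styles inside a single Go process (again my own illustration, with made-up names like SharedCounter and counterOwner): one counter guarded by a mutex over shared memory, and one owned by a single goroutine that others talk to only through channels. Note that even the channel version only reaches goroutines in the same OS process, which is the clustering limitation described above.

    package main

    import (
        "fmt"
        "sync"
    )

    // Style 1: shared memory guarded by a mutex.
    type SharedCounter struct {
        mu sync.Mutex
        n  int
    }

    func (c *SharedCounter) Incr() {
        c.mu.Lock()
        c.n++
        c.mu.Unlock()
    }

    // Style 2: message passing; one goroutine owns the state and others
    // interact with it only by sending and receiving on channels.
    func counterOwner(incr <-chan struct{}, read chan<- int) {
        n := 0
        for {
            select {
            case <-incr:
                n++
            case read <- n:
            }
        }
    }

    func main() {
        // Shared-memory style.
        var wg sync.WaitGroup
        c := &SharedCounter{}
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func() { defer wg.Done(); c.Incr() }()
        }
        wg.Wait()
        fmt.Println("shared memory:", c.n)

        // Message-passing style.
        incr := make(chan struct{})
        read := make(chan int)
        go counterOwner(incr, read)
        for i := 0; i < 100; i++ {
            incr <- struct{}{}
        }
        fmt.Println("message passing:", <-read)
    }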
With shared memory comes garbage collection and GC pauses, although the Go team has done great work optimizing this. With BEAM, every new process (the equivalent of a goroutine) gets its own heap, which can be collected independently without pausing the whole system. This also makes hot deployments possible, so deploying an update to a codebase with millions of active websocket connections can be done without triggering millions of simultaneous reconnections.
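As a rough illustration of the single shared heap (just a sketch, not anything from the post): Go's runtime reports GC activity program-wide through runtime.ReadMemStats, because one collector serves every goroutine, whereas BEAM collects each process's private heap on its own schedule.

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        // Generate some garbage so a collection has something to do.
        for i := 0; i < 1000; i++ {
            _ = make([]byte, 1<<16)
        }
        runtime.GC() // force a collection for the demo

        // The pause history is global: it covers the whole program, not any
        // one goroutine. (Pauses are very short on modern Go, but they apply
        // to everything at once.)
        var m runtime.MemStats
        runtime.ReadMemStats(&m)
        fmt.Printf("collections: %d, last pause: %dns\n",
            m.NumGC, m.PauseNs[(m.NumGC+255)%256])
    }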
The shared runtime vs isolation difference means that a goroutine that blows up can crash the entire system if the problem isn't handled right where it occurs. When writing Go code you find yourself writing a line of error handling for every line of functionality. With BEAM's isolation, processes are spawned with ids, and processes are so inexpensive that the standard method is to create two: one as a supervisor and one as the worker. If the worker ever crashes for some reason, the supervisor just restarts it immediately. This gives you a granular level of isolation and reliability. There is a library for Go that I remember seeing that implements a supervisor pattern for reliability, though (http://www.jerf.org/iri/post/2930).
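For what it's worth, here's a minimal hand-rolled version of that supervisor idea in Go. This is not the API of the library linked above, just a sketch of the pattern: a worker that may panic gets wrapped in recover(), and a supervising loop restarts it whenever it stops, loosely like an OTP supervisor restarting a crashed process. (Without the recover, a panicking goroutine takes the whole program down.)

    package main

    import (
        "fmt"
        "time"
    )

    // supervise runs work in its own goroutine and restarts it whenever it
    // stops, whether it returned normally or panicked.
    func supervise(name string, work func()) {
        go func() {
            for {
                func() {
                    defer func() {
                        if r := recover(); r != nil {
                            fmt.Printf("%s crashed: %v; restarting\n", name, r)
                        }
                    }()
                    work()
                }()
                time.Sleep(100 * time.Millisecond) // simple restart backoff
            }
        }()
    }

    func main() {
        supervise("flaky-worker", func() {
            fmt.Println("worker starting")
            time.Sleep(200 * time.Millisecond)
            panic("something went wrong") // unrecovered, this would kill the program
        })
        time.Sleep(1 * time.Second) // let it crash and restart a few times
    }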
Go will win benchmarks because of the choices the language made, but the benefits for long-running uptime, reliability, distribution, and consistent performance in the face of bad actors will be in favor of Erlang/Elixir.
That said, the steps Go took toward implementing the closest thing to Erlang-like concurrency make it the winner by far among the non-BEAM languages.