AuroraMC is a very advanced Minecraft network that relies on hundreds of thousands of lines of code to keep running, as well as several other services that are essential for pushing updates, creating new content and keeping the network running as smoothly as possible. To cater to those who are curious, as well as to stay as transparent as possible, we've put this document together to give you an insight into how the network operates, how we create updates and the tools we use to run the network.
Connecting you to our services
I thought I'd start off with how we connect you to our servers and the layers we have in between you and them.
To start off, when you attempt to connect to our services, for security reasons you are connected to a DDoS protection service first. This is the reason you may notice a little delay when loading our website from time to time. That is our DDoS protection service protecting us from DDoS attacks and keeping our services running smoothly! What happens next is dependent on which service you are trying to access.
When connecting to our website, you are connected to our hosting provider, which handles routing traffic and generating and serving web pages. When attempting to connect to the store, you are handed off to another of our hosted servers, which deals with generating and serving our store.
When connecting to our Minecraft Network, once you have passed the DDoS protection layer, you are handed off to our load balancer layer. The load balancer directs your connection to one of our connection nodes, ensuring that each connection node has roughly the same number of active connections and that none of them are overloaded with too many players. Once the connection has been handed off to a connection node, that node connects you to our servers. The connection node deals with moving you between servers, communicating with other players/connection nodes and handling some of the network features. The connection nodes essentially act as a proxy, exactly how BungeeCord works (our connection nodes are entirely based on BungeeCord, albeit a modified BungeeCord). Below is an image to illustrate how you connect to our services!
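Alongside that diagram, here's a minimal sketch of the "roughly equal connections" idea, written as a simple least-connections strategy. This is purely illustrative: the class names and the exact strategy are assumptions for the example, not taken from our real load balancer.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Hypothetical view of one connection node (a BungeeCord-based proxy).
record ConnectionNode(String id, int activeConnections, int maxConnections) {

    boolean hasCapacity() {
        return activeConnections < maxConnections;
    }
}

final class LoadBalancer {

    // Pick the node with the fewest active connections that still has room,
    // so every node ends up with roughly the same number of players.
    Optional<ConnectionNode> pickNode(List<ConnectionNode> nodes) {
        return nodes.stream()
                .filter(ConnectionNode::hasCapacity)
                .min(Comparator.comparingInt(ConnectionNode::activeConnections));
    }
}
```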
So much data!
We use many services at AuroraMC to keep our network online, and the one you'll probably most want to know about is how we store all that data!
We use two different types of database at AuroraMC: MySQL and Redis. Technically Redis is more of a data store than a database, but we use both MySQL and Redis for several essential services across the network. While Redis was an easy choice for some features, there was more debate over which database to use for most of our core features. There were a few options we sifted through, including MongoDB, which was the other main contender, but we ultimately decided on MySQL: while it is considered slower than other databases (especially with large tables), the information stored in it is not needed quickly, and the advantages of MySQL far outweigh the downsides of a slower database.
In MySQL, we store things like player profiles (ranks, economy details, friends etc.), punishments, forums data, store data and a selection of other data. Redis is used to store statistics, Plus subscriptions and preferences.
To help us manage all that data, we use an instance of phpMyAdmin running on a local web server. It should be noted that our forums, store and all other MySQL data are stored on three completely separate, isolated MySQL servers, so downtime on one does not affect the other two.
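As a rough illustration of that split, here's a minimal sketch of how the two stores might be queried using the MySQL Connector (via JDBC) and Jedis, both of which appear in the libraries list later in this thread. The connection details, table, column and key names here are made up for the example and are not our real schema.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.UUID;
import redis.clients.jedis.Jedis;

public final class DataStoresExample {

    public static void main(String[] args) throws Exception {
        UUID uuid = UUID.fromString("00000000-0000-0000-0000-000000000000");

        // Durable data (profiles, punishments, store purchases) lives in MySQL.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/auroramc", "user", "password");
             PreparedStatement stmt = conn.prepareStatement(
                "SELECT player_rank FROM player_profiles WHERE uuid = ?")) {
            stmt.setString(1, uuid.toString());
            try (ResultSet rs = stmt.executeQuery()) {
                if (rs.next()) {
                    System.out.println("Rank: " + rs.getString("player_rank"));
                }
            }
        }

        // Fast-changing data (statistics, preferences) lives in Redis.
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String kills = jedis.hget("stats:" + uuid, "kills");
            System.out.println("Kills: " + kills);
        }
    }
}
```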
How we manage the network
As you can imagine, managing a network of this complexity is a mammoth task, and we have several internal tools to help us do that.
Managing servers and connection nodes
To help us manage the number of servers we have, we employ several services and techniques to manage, create and destroy servers and connection nodes as they are needed.
On one of our servers, we run what we call Mission Control. Mission Control is our network daemon and is responsible for creating and managing all of the servers and connection nodes that players can connect to. As servers and connection nodes are needed, they are created. For instance, if a game is in high demand and all of the current servers are either full or in a game, it will create a new instance of a server for that game and let the lobby servers and connection nodes know that there is a new server players can connect to. The same works in reverse too! If there is only one full server, and all of the other servers are either empty or partially full, the daemon will remove servers from the list of active servers and allocate those resources somewhere else.
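To illustrate the kind of decision the daemon is making, here's a simplified sketch of that scale-up/scale-down logic. The names and rules here are illustrative assumptions for the example, not Mission Control's actual code.

```java
import java.util.List;
import java.util.Optional;

// Hypothetical view of a running game server instance as the daemon might see it.
interface GameServer {
    boolean isFull();
    boolean isInGame();
    int getPlayerCount();
}

final class ScalingPolicy {

    // Spin up a new server for a game when every existing one is full or mid-game.
    boolean shouldCreateServer(List<GameServer> servers) {
        return servers.stream().allMatch(s -> s.isFull() || s.isInGame());
    }

    // Scale down when only one server is actually busy and the rest are idle,
    // only ever destroying a server that nobody is connected to.
    Optional<GameServer> serverToRemove(List<GameServer> servers) {
        long busy = servers.stream().filter(s -> s.isFull() || s.isInGame()).count();
        if (busy <= 1) {
            return servers.stream()
                    .filter(s -> s.getPlayerCount() == 0)
                    .findFirst();
        }
        return Optional.empty();
    }
}
```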
Mission Control also manages pushing updates to the live network and telling servers to restart themselves; we'll explain more about that later in the thread.
Other internal tools and utilities
We have several other internal tools and utilities that we use to manage the network. As an admin team, we have a custom-built admin panel that we can use to view and manage areas of the network, including:
- Punishments
- Rules
- The Chat Filter
- Username Blacklist
- Live statistics and network metrics
- Store packages
Libraries we utilise
To make AuroraMC possible, and as efficient as possible, we utilise a selection of open-source and custom libraries. In order to stay transparent about the libraries we use, we are including them in this thread.
Please bear in mind that while we will list most of the libraries we use, we will not be releasing which versions we are using, and some libraries have been omitted from the list to prevent security issues and avoid leaking internal information.
Open Source Utilities
We utilise several open-source repositories, many of which are provided by The Apache Foundation and other similar organisations. The list is as follows:
- Java MySQL Connector, Oracle
- commons-io, The Apache Software Foundation
- jline
- Jedis, Redis
- commons-dbcp2, The Apache Software Foundation
- commons-text, The Apache Software Foundation
- httpcore, The Apache Software Foundation
- httpclient5, The Apache Software Foundation
- JSON
- Pterodactyl4J, Matt Malec
Custom Utilities
While we won't be going into specific detail about our custom libraries, I think it's nice for people to be able to see what kinds of utilities we've developed, to get a sense of what goes on behind the scenes.
One of the main libraries we have created is what we call the Communications Protocol. In short, it is a way of allowing servers and connection nodes to talk to one another (server-to-server and proxy-to-proxy; in the event that it's a proxy-to-server or server-to-proxy message, we use the Plugin Messaging channel provided by BungeeCord/Spigot). If you've ever encountered RabbitMQ, it's the same concept, except without the central server. Whenever a server (or proxy) needs to communicate with another server/proxy, it uses the Communications Protocol to send messages back and forth. For example, when you attempt to message someone who is on a different connection node, the Communications Protocol is used to deliver that message to that user!
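For the proxy-to-server direction, the stock BungeeCord plugin messaging channel mentioned above looks roughly like this on the Spigot side. This is a sketch of the standard BungeeCord/Spigot mechanism, not our custom Communications Protocol; the plugin class name is made up for the example.

```java
import com.google.common.io.ByteArrayDataOutput;
import com.google.common.io.ByteStreams;
import org.bukkit.entity.Player;
import org.bukkit.plugin.java.JavaPlugin;

public class MessagingExample extends JavaPlugin {

    @Override
    public void onEnable() {
        // Register the outgoing channel once so the server is allowed to write to it.
        getServer().getMessenger().registerOutgoingPluginChannel(this, "BungeeCord");
    }

    // Ask the proxy to deliver a chat message to a player who may be on another backend server.
    public void sendCrossServerMessage(Player sender, String targetName, String message) {
        ByteArrayDataOutput out = ByteStreams.newDataOutput();
        out.writeUTF("Message");    // built-in BungeeCord subchannel
        out.writeUTF(targetName);   // recipient's username
        out.writeUTF(message);      // message text
        // Plugin messages piggyback on a player's connection to reach the proxy.
        sender.sendPluginMessage(this, "BungeeCord", out.toByteArray());
    }
}
```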
Another library we've developed gives developers a nice way to generate GUIs on the fly. It allows developers to dynamically create and open GUIs for players and lets them define custom logic for what happens when an item is clicked. All they need to do is give the GUI a name, define in the GUI constructor what items go where, and then define in a GUI#onClick method what they want to happen when an item is clicked. Once the GUI is generated, all they have to do is open it for the player, and hey presto! The server handles everything else!
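To give a feel for the idea, here's a minimal sketch of such a GUI helper built directly on the Bukkit/Spigot API. The class and method names (GUI, setItem, onClick, open) are illustrative for this example and not taken from the real library.

```java
import org.bukkit.Bukkit;
import org.bukkit.entity.Player;
import org.bukkit.event.EventHandler;
import org.bukkit.event.Listener;
import org.bukkit.event.inventory.InventoryClickEvent;
import org.bukkit.inventory.Inventory;
import org.bukkit.inventory.InventoryHolder;
import org.bukkit.inventory.ItemStack;

public abstract class GUI implements InventoryHolder {

    private final Inventory inventory;

    protected GUI(String title, int rows) {
        // Bukkit inventories are sized in multiples of 9 slots; using the GUI as the
        // inventory holder lets the shared click listener find it again later.
        this.inventory = Bukkit.createInventory(this, rows * 9, title);
    }

    // Subclasses place their items in the constructor via this helper.
    protected void setItem(int slot, ItemStack item) {
        inventory.setItem(slot, item);
    }

    // Subclasses define what should happen when a slot is clicked.
    public abstract void onClick(Player player, int slot);

    public void open(Player player) {
        player.openInventory(inventory);
    }

    @Override
    public Inventory getInventory() {
        return inventory;
    }

    // One shared listener, registered once by the plugin, routes every click to the right GUI.
    public static class ClickListener implements Listener {

        @EventHandler
        public void onInventoryClick(InventoryClickEvent event) {
            if (event.getInventory().getHolder() instanceof GUI) {
                GUI gui = (GUI) event.getInventory().getHolder();
                event.setCancelled(true); // stop players taking items out of the menu
                gui.onClick((Player) event.getWhoClicked(), event.getRawSlot());
            }
        }
    }
}
```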
Creating and pushing updates to the network
We use several different systems to create, test and push updates to our network. Before we go into detail about that, I think it's important to describe how our plugins are structured.
The network is made up of 7 separate modules. These are:
- Our Core: Our core contains all of the core systems needed by the entirety of the network and our core API, handling things like player connections, chat, the chat filter, punishments etc.
- Our Proxy Core: Similar to our main core, this contains all the core systems related to connection nodes. A significant portion of the core and proxy core codebases is similar, if not the same.
- Our Lobby: Our Lobby plugin contains all functionality needed for lobby servers only, such as loading in the lobby map, lobby games, opening crates and more.
- Our Game Engine: The game engine deals with how games and maps are loaded, how games are started, and the core API for games to use (there's a rough sketch of this idea just after this list).
- Our Games: This module contains the code for all of the games that are available to play on AuroraMC (including previous/removed games).
- Our Build Core: This is the core for our internal Build server which manages how maps are pushed to the network, in-progress builds etc.
- Our Event Core: The Event Core manages all of the commands, special games and more that are utilised by our Events Team.
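As a purely illustrative example of the kind of contract a game engine like ours provides to the games module, here's a tiny sketch. Every name in it is made up for this thread; it is not our real Game Engine API.

```java
import java.util.List;
import java.util.UUID;

// Hypothetical view of what the engine might hand a game when it starts a round.
interface GameContext {
    List<UUID> players();   // players assigned to this round
    String mapName();       // the map the engine has loaded
}

// Hypothetical base class a game in the games module could extend.
abstract class Game {

    // Called once the engine has loaded the map and enough players have joined.
    public abstract void onStart(GameContext context);

    // Called when a win condition is met or the time limit is reached.
    public abstract void onEnd(GameContext context);
}

// A trivially small "game" showing how a module would plug into the contract above.
final class ExampleGame extends Game {

    @Override
    public void onStart(GameContext context) {
        System.out.println("Starting on " + context.mapName()
                + " with " + context.players().size() + " players");
    }

    @Override
    public void onEnd(GameContext context) {
        System.out.println("Game over!");
    }
}
```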
Creating updates
Creating updates is relatively simple. Our developers use an IDE of their choice to write our updates. To manage all of our code, we use Git source control, which allows developers to manage code, collaborate, perform tasks such as code reviews and push updates to the network.
To manage things such as dependencies, we use Maven as our build tool, along with a repository manager (Sonatype Nexus, to be exact), to handle things such as using the Spigot API, using the AuroraMC Core API in other plugins, and using the Game Engine in the games plugin.
Testing updates
To test our updates, we have what we call the testing network. It is a network, separate from the main network, where all alpha, beta and in-progress development branches are tested. When our developers need to test a branch, all they need to do is push the changes needing to be tested to the branch they are working on, and use an internal tool to tell the testing network to build the required plugin and create a server with that plugin on it.
Once the server is online, the developer can test whatever they need to, with or without the assistance of our Quality Assurance team. Once they are done, they use the tool to tell the daemon to remove the server and destroy the instance.
Pushing updates
Pushing updates to the network is not a simple task. As soon as the code needing to be pushed is merged into our master branch in Git, our Continuous Integration server (or CI server for short) is told to build the update. Once the build is complete, we are informed and it is ready to be pushed to the live network.
Once it is ready, we use a custom-built internal tool to tell Mission Control to restart all relevant servers. For instance, if we push an update to friends, we need to push the update to all connection nodes and servers, whereas if we push an update to the lobby, only the lobby needs to be updated, so Mission Control only restarts those servers.
To prevent everyone from being kicked from the network, and to avoid people being kicked mid-game, server restarts are staggered and scheduled for when the server does not have a game in progress. If the server has a game in progress and Mission Control tells it to restart, it will wait until its current game ends before restarting itself.
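A simplified sketch of that "wait for the game to finish" behaviour might look like this on a Spigot server. The class name, the restart trigger and the game-state check are illustrative assumptions, not our actual restart code.

```java
import org.bukkit.Bukkit;
import org.bukkit.plugin.java.JavaPlugin;

public class GracefulRestart extends JavaPlugin {

    private boolean restartRequested = false;

    // Called when Mission Control (or whatever controls this server) asks it to restart.
    public void requestRestart() {
        restartRequested = true;
        tryRestart();
    }

    // Called by the game logic when the current game finishes.
    public void onGameEnd() {
        tryRestart();
    }

    private void tryRestart() {
        if (!restartRequested || isGameInProgress()) {
            return; // defer the restart until the game in progress has ended
        }
        // In practice, players would first be moved to a lobby via their connection node;
        // shutting down then lets the daemon bring up an updated instance in its place.
        Bukkit.shutdown();
    }

    private boolean isGameInProgress() {
        // Placeholder: in reality this would consult the game engine's state.
        return false;
    }
}
```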
Finishing Off
I hope that this thread gives you a look into how our network operates and gives you some insight into how we get our updates from our screens to yours!

