This has been copied (with a couple of corrections) from the semi-daily updates posted by schema in our official Discord server, in the #universe-update-dev-news-dump channel. To receive universe update news as it happens, join our Discord channel here: Join the StarMade Discord Server!
We'll be posting more of these dumps over the next few days to get up to date with where we're at now. All update news is available for reading in our Discord server. This post is for those of you not in our Discord channel.
TL;DR
Between the 1st and 10th of October, the following was done:
- New network protocol implemented, providing much-needed optimisations. Message handling is easier to synchronise, resulting in fewer errors from multi-threading. NIO is now used, which works on native memory as opposed to the heap; this memory is much faster to access from native functions. Finally, an interface-driven callback/listener system replaces the old Observer pattern, resulting in far fewer unnecessary calls and more control over what called which listener.
- Audio engine, designed to be highly customisable, easy to integrate into our existing codebase, and focused on performance, while also allowing player-made soundpacks to be created. The system we're using can also be extended for particle effects and maybe even modding! Much work on the audio engine was done during this time; many actions in the game now have audio events attached to them.
You can have a listen to our work-in-progress soundtrack here:
1st of October
Currently working on the network protocol, replacing it with a new one I've been working on for quite a while. This cleans up a lot of things and provides much-needed optimisations. The biggest aspect is that message handling will be easier to synchronise, which should result in fewer errors from multi-threading. It also makes more use of NIO (New I/O), which works on native memory as opposed to the heap. Native memory is faster to access from native functions, such as a socket reading into it.
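To illustrate the heap/native distinction, here is a minimal sketch (not StarMade code) of a socket read into a direct buffer; the host and buffer size are placeholder assumptions:
Code:
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class DirectBufferRead {
    public static void main(String[] args) throws IOException {
        // A direct buffer lives in native memory, so the OS can read
        // socket data into it without an extra copy through the Java heap.
        ByteBuffer buffer = ByteBuffer.allocateDirect(64 * 1024);
        try (SocketChannel channel = SocketChannel.open(
                new InetSocketAddress("example.com", 80))) {
            int bytesRead = channel.read(buffer); // fills native memory directly
            buffer.flip(); // switch the buffer from writing to reading mode
            System.out.println("Read " + bytesRead + " bytes");
        }
    }
}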
At the same time I'm removing every use of the Observer pattern in the code (java.util.Observer was deprecated in Java 9) and replacing it with an interface-driven callback/listener system. This results in far fewer unnecessary calls as well as much more control over what called which listener. I already wrote all of the network code for a separate framework, so all that needs to be done is to integrate it into StarMade.
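As a rough sketch of that pattern swap (the interface and class names here are made up, not StarMade's actual API):
Code:
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

interface ConnectionListener {
    void onPacketReceived(int packetId);
}

class NetworkChannel {
    // CopyOnWriteArrayList keeps iteration safe while listeners are
    // added or removed from other threads.
    private final List<ConnectionListener> listeners = new CopyOnWriteArrayList<>();

    void addListener(ConnectionListener l)    { listeners.add(l); }
    void removeListener(ConnectionListener l) { listeners.remove(l); }

    void handlePacket(int packetId) {
        // Unlike java.util.Observable, only listeners registered for this
        // specific event type get called, and the call site is explicit.
        for (ConnectionListener l : listeners) {
            l.onPacketReceived(packetId);
        }
    }
}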
2nd of October
All the above was completed after 14 hours. The refactoring was quite massive: 369 changed files with 5,733 additions and 7,207 deletions.
3rd of October
OK, so after much thought, these are the requirements I want for our audio system:
- Ease of use: one of the hardest things is organising sounds into groups so that you can assign one sound to multiple events of a similar type, for example one sound for all OK buttons. However, it should also be granular enough that we can assign a sound to an individual action if needed.
- Minimal code clutter: firing an audio event should not take more than a line, and it should manage itself into a meta state, so that audio can be assigned within a config and a tool.
- Management: tools that handle the assignment, type, and parameters of each sound.
After trying a lot of more complicated approaches, I've finally found a simple one that satisfies all of these requirements. Not only that, it is reusable for other things as well.
I'm going to use a tag system in combination with a pre-processor. What that essentially means is that no audio event takes more than a line of code. Simple broad grouping can be done with tags like GUI, OK, WEAPON, BEAM, CANNON, etc., adding more as needed. Each tag also has a hierarchy weight so that tags can be sorted (GUI > OK).
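A minimal sketch of what tags with hierarchy weights could look like (the names and weights are illustrative assumptions):
Code:
import java.util.Arrays;
import java.util.Comparator;

enum AudioTag {
    // Broad groups carry a low weight so they sort before specific tags.
    GUI(0), WEAPON(0),
    OK(10), BEAM(10), CANNON(10);

    final int weight;
    AudioTag(int weight) { this.weight = weight; }

    static AudioTag[] sorted(AudioTag... tags) {
        AudioTag[] copy = tags.clone();
        Arrays.sort(copy, Comparator.comparingInt(t -> t.weight));
        return copy; // e.g. sorted(OK, GUI) yields [GUI, OK]
    }
}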
This assignment will be done using annotations, prepping it for the pre-processor, which will then read the code.
The pre-processor will then auto-assign a unique numerical ID to each line that fires an audio event (maintaining and updating already-assigned ones, removing deleted ones, and adding new ones into a config database). In-game, all that line will do is fire the numerical ID, which means minimal overhead.
What happens with the event is then decided by what is assigned in the config to that ID. The config will be read at startup, so all audio events have an endpoint.
The config can then be edited with in-game tools. Essentially you get a list of tag groups like this:
ID 2342: GUI, MAIN_MENU, CLOSE with some more context autogenerated from the preprocessor (class and package name of the event origin).
Audio is assigned to combinations of tags instead of individual events most of the time, though an individual override is always possible.
The tool will also have a feature to display which events just fired. So if you do anything that still needs a sound attached, the tool will display it live. This should be the best and fastest way to give everything a sound with minimal organisational effort and minimal code clutter.
All the sound modification can be baked into the config too (optional pitch, length, and optimisation parameters), as well as attaching a position to a sound. The code would then just fire the sound with positional data as well as a control option (START/UPDATE/STOP).
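A sketch of what such a positional call might look like (the signatures are assumptions based on the description above, not the actual API):
Code:
// Trimmed-down version of the tag enum sketched earlier.
enum AudioTag { GUI, OK, WEAPON }

enum AudioControl { START, UPDATE, STOP }

interface AudioEvents {
    // One-shot events, e.g. GUI clicks: tags only, no position needed.
    void fireAudioEvent(AudioTag... tags);

    // Positional events: the emitter's coordinates plus a control option,
    // so a looping sound can be started, moved along, and stopped.
    void fireAudioEvent(float x, float y, float z,
                        AudioControl control, AudioTag... tags);
}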
4 hours later
Alrighty. All GUI actions should now have an audio event attached to them, tagged the right way (I'll make it so I can flag possibly wrong tagging later and have the fix automatically reflected back into the code). Another big refactor: 2,029 changed files with 6,306 additions and 9,051 deletions.
4th of October
Today was spent fixing bugs in the integration of the new network code. The network code is fully working now. Going to do some more audio work in the evening.
5th of October
Did some more work to audiofy the code, this time for weapons/modules. In the metadata there will also be options to layer audio depending on distance (explosions sound different from far away than they do close up).
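For instance, a distance-layered pick could be as simple as this sketch (the threshold and file names are made up):
Code:
class LayeredSound {
    static final float FAR_THRESHOLD = 500.0f; // illustrative distance cutoff

    // Pick the sound layer by distance: far away you hear a muffled
    // rumble, close up the sharp full explosion.
    String selectLayer(float distanceToListener) {
        return distanceToListener > FAR_THRESHOLD
                ? "explosion_far.ogg"
                : "explosion_near.ogg";
    }
}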
6th of October
More "audiofying" of events. Some smaller new requirements popped up for that:
- Remote events that trigger only on the server while the client receives a more general event (e.g. an activation gate). These will use the same system: the server will not trigger normal sound events but will handle sending the remote ones. This just requires more info on where to send the event (object ID, etc.).
- Sub-IDs for events: some events that involve state changes (start, stop) need an extra ID so they can be handled automatically (e.g. beams being fired and stopped); see the sketch after this list.
- Ship ambience: some blocks will emit ambient sound (like reactors, factories, thrusters, etc.). The same system handles those, but with an extra layer of management to automatically start/stop and update sound events for block collections (<- I'm here).
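A sketch of how a sub-ID might separate instances of a start/stop event (all names and IDs here are assumptions):
Code:
class BeamAudio {
    enum AudioControl { START, STOP }

    void onBeamStarted(int beamInstanceId) {
        // The event ID says *what* fired ("beam firing"); the sub-ID says
        // *which* beam, so the matching looping sound can be stopped later.
        fireAudioEventID(42, beamInstanceId, AudioControl.START); // 42 is a placeholder event ID
    }

    void onBeamStopped(int beamInstanceId) {
        fireAudioEventID(42, beamInstanceId, AudioControl.STOP);
    }

    void fireAudioEventID(int id, int subId, AudioControl control) {
        /* hand off to the audio system */
    }
}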
One nice thing is that this system can be reused for general event handling, e.g. particle systems and possibly modding.
7th of October
Still implementing the ambience manager.
8th of October
Alright. A lot of things have audio events attached to them now, including metadata for keeping track of events that need to be started and stopped. Now I'll implement the pre-processor function that reads all the events in the code and attaches an ID to each, as well as transferring them into an ID->Event config. After that, the tool to attach an actual sound to an event can be made.
9th of October
During my research I found javaparser/javaparser, which seems to be perfect for the preprocessing. I've read the documentation and worked through a few examples.
It is a very powerful tool that essentially parses Java code and puts it into a meta-model.
So instead of having to parse each file line by line with regular expressions written specifically for the function calls, which in this case would be a mess and quite error-prone, I'm using this library, which does all the parsing for me. It gives you a complete tree of your code, including symbol solving (metadata from imports), so you can just search for a specific function, extract all its arguments, modify the data, and output it back into code.
So what I'll be doing is looking for calls of "fireAudioEvent", which has multiple versions depending on complexity. The simplest ones are client-global events for GUI audio, like clicking an OK button. In this case the arguments of the function would just be the audio tags: AudioTag.GUI, AudioTag.BUTTON, AudioTag.OK, etc. The preprocessor assigns that event a unique ID, then puts that ID and the tags into a config file. After that it changes the call from fireAudioEvent to fireAudioEventID, which takes only that ID as an argument. Any future changes to tags would be done in the editor, which modifies the config file. That means the tag-based call is just the entry point used to classify the audio event initially. Any event can of course easily be reverted to its original state.
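A minimal sketch of that rewrite step with JavaParser (the symbol-solver setup and the config writing are omitted, and the file name is a placeholder):
Code:
import com.github.javaparser.StaticJavaParser;
import com.github.javaparser.ast.CompilationUnit;
import com.github.javaparser.ast.NodeList;
import com.github.javaparser.ast.expr.IntegerLiteralExpr;
import com.github.javaparser.ast.expr.MethodCallExpr;

import java.io.File;
import java.io.FileNotFoundException;

public class AudioEventPreprocessor {
    public static void main(String[] args) throws FileNotFoundException {
        CompilationUnit cu = StaticJavaParser.parse(new File("TestClass.java"));

        int nextId = 1; // the real tool would reuse IDs from the existing config
        for (MethodCallExpr call : cu.findAll(MethodCallExpr.class)) {
            if (!call.getNameAsString().equals("fireAudioEvent")) continue;
            // Record the tag arguments for the config entry...
            System.out.println("Found tags: " + call.getArguments());
            // ...then rewrite the call to use just the numeric ID.
            call.setName("fireAudioEventID");
            call.setArguments(NodeList.nodeList(
                    new IntegerLiteralExpr(String.valueOf(nextId++))));
        }
        System.out.println(cu); // prints the rewritten source
    }
}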
3 hours later
Alrighty. Got that working.
For a test class, this would be the snippet in code:
Code:
...
fireAudioEvent(AudioTags.GUI, AudioTags.BUTTON, AudioTags.PRESS, AudioTags.HOVER);
...
This is the simplest form of an audio event (a GUI event that would fire when hovering over a button). The parser catches that call and makes sure it is indeed that method being called by resolving the type (so this wouldn't fail even if I had the same method name declared somewhere else).
It produces a new entry, which is then saved to the config as:
Code:
<Entry>
    <Id>1</Id>
    <Version>1</Version>
    <Tags>
        <Tag>BUTTON</Tag>
        <Tag>GUI</Tag>
        <Tag>PRESS</Tag>
        <Tag>HOVER</Tag>
    </Tags>
    <Name/>
    <Param>ONE_TIME</Param>
    <IsRemote>false</IsRemote>
    <HasArgument>false</HasArgument>
    <Output/>
</Entry>
(The <Output/> tag is where all the data on what to do for that event goes.)
At the same time the code is modified to use the new ID:
Code:
...
fireAudioEventID(1);
...
So performance-wise there is close to no overhead from the system itself, since all it's going to do in-game is call a function with an ID.
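Under the assumptions above (config loaded once at startup), the in-game side reduces to a map lookup; a sketch with made-up names:
Code:
import java.util.HashMap;
import java.util.Map;

class AudioEventRuntime {
    static class AudioEntry {
        final String soundFile;
        AudioEntry(String soundFile) { this.soundFile = soundFile; }
    }

    // Filled once at startup from the XML config the preprocessor wrote.
    private final Map<Integer, AudioEntry> config = new HashMap<>();

    void fireAudioEventID(int id) {
        AudioEntry entry = config.get(id);
        if (entry != null) {
            play(entry.soundFile); // the only in-game work: lookup + play
        }
    }

    void play(String soundFile) { /* hand off to the sound backend */ }
}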
For the in-game editor you will have a list of fired events available. So when you hover, you would see this event with the ID 1 being fired; you could then click on it and either assign a sound directly to that individual event, or assign a sound to that set of tags, which would then cover all hover sounds unless overridden by an individual assignment for that ID.
Next up is implementing the more advanced calls that have context: sounds that need spatial information and/or context on which object they belong to (e.g. an ambient sound emitted by a ship).
10th of October
Alrighty. The preprocessor is done, and we now have a nice config file with 960 entries for audio events. Next will be the actual handling of audio events and the playing of sounds at the appropriate times.