I've noticed that the default settings for the database and Java seem to be made for computers from 2002, so some adjustments can really improve server performance.
Here I go over some of the things I've noticed that improve database performance.
In the directory shown below, a file called .script holds settings for the database, and the changes made here can improve write performance substantially. These documents helped me *tune* some settings.
http://www.hsqldb.org/web/hsqlPerformanceTests.html
http://stackoverflow.com/questions/10293517/hypersql-hsqldb-massive-insert-performance
http://hsqldb.org/doc/guide/dbproperties-chapt.html
D:\StarMade\server-database\index\.script
SET DATABASE DEFAULT RESULT MEMORY ROWS 0
"This property can be set to specify how many rows of each result or temporary table are stored in memory before the table is written to disk. The default is zero and means data is always stored in memory. If this setting is used, it should be set above 1000."
I've been experimenting with this. The documentation seems to imply that if you change it from the default at all, it should be set above 1000.
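If you do change it, a value following the documentation's guidance might look like this (10000 is just an illustrative value, not something I've benchmarked):

```sql
-- Spill each result set or temporary table to disk once it exceeds
-- 10,000 rows, instead of always holding it fully in memory (default 0):
SET DATABASE DEFAULT RESULT MEMORY ROWS 10000
```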
SET FILES WRITE DELAY 500 MILLIS
By default this writes changes to the log every 500 milliseconds, or every half second. SET FILES WRITE DELAY 5 would write changes to the log every 5 seconds instead.
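To make the two unit syntaxes concrete (the values here are just examples):

```sql
-- Flush log changes every 500 milliseconds (the default behaviour):
SET FILES WRITE DELAY 500 MILLIS

-- Flush log changes every 5 seconds instead; fewer disk syncs,
-- but up to 5 seconds of changes can be lost in a crash:
SET FILES WRITE DELAY 5
```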
SET FILES CACHE SIZE 10000
"Indicates the total size (in kilobytes) of rows in the memory cache used with cached tables. This size is calculated as the binary size of the rows, for example an INTEGER is 4 bytes. The actual memory size used by the objects is 2 to 4 times this value. This depends on the types of objects in database rows, for example with binary objects the factor is less than 2, with character strings, the factor is just over 2 and with date and timestamp objects the factor is over 3.
The value can range between 100 KB and 4 GB. The default is 10,000, representing 10,000 kilobytes. If the value is set via SET FILES then it becomes effective after the next database SHUTDOWN or CHECKPOINT."
SET FILES CACHE ROWS 50000
"Indicates the maximum number of rows of cached tables that are held in memory.
The value can range between 100 and 4 billion. If the value is set via SET FILES CACHE ROWS then it becomes effective after the next database SHUTDOWN."
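Putting the two cache settings together, with a rough memory estimate based on the 2-to-4-times factor quoted above (these numbers are examples to tune for your own RAM, not tested recommendations):

```sql
-- 65,536 KB (64 MB) of raw row data; actual heap use is roughly
-- 2-4x that, so budget on the order of 128-256 MB:
SET FILES CACHE SIZE 65536

-- Allow up to 200,000 rows of cached tables in memory:
SET FILES CACHE ROWS 200000
```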
SET FILES DEFRAG 0
When a checkpoint is performed, the percentage of wasted space in the .data file is calculated. If the wasted space is above the specified limit, a defrag operation is performed. The default is 0, which means no automatic defrag. The numeric value must be between 0 and 100 and is interpreted as a percentage of the current size of the .data file.
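If you do want automatic defrag, a setting like this would trigger one at checkpoint whenever wasted space reaches the threshold (30 is just an example percentage):

```sql
-- Defrag the .data file at checkpoint once 30% of it is wasted space:
SET FILES DEFRAG 30
```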
The next setting is probably the most important.
SET FILES NIO TRUE
SET FILES NIO SIZE 256
By default, NIO is used for .data files from 16 MB up to 256 MB. For larger databases with CACHED tables, use of nio improves database access speed significantly. Performance improvements can be tenfold or even higher. You can increase the limit with the SET FILES NIO SIZE statement. There should be enough RAM available to accommodate the memory-mapped buffers. For very large nio usage, a 64-bit JVM must be used. The memory is not taken from the JVM memory allocation, therefore there is no need to increase the -Xmx parameter of the JVM. If not enough memory is available for the specified value, nio is not used.
On your machine, you can improve IO significantly by using a large NIO limit for memory-mapped IO, for example SET FILES NIO SIZE 8192. A 64-bit JVM is required for larger sizes to have an effect.
SET FILES LOG TRUE
SET FILES LOG SIZE 50
This option sets whether or not to use the database log. The database log helps the database recover from crashes; disabling it improves performance while making recovery hard or impossible. I mention it here because if you are making a large batch of changes to the database and are confident the possibility of a crash is very low, those changes can be applied faster with the log disabled.
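To pull the whole post together, here is one possible combination of the settings above for the .script file. Treat the numbers as a starting sketch to adjust for your server's size and available RAM, not as benchmarked values:

```sql
SET DATABASE DEFAULT RESULT MEMORY ROWS 10000
SET FILES WRITE DELAY 5
SET FILES CACHE SIZE 65536
SET FILES CACHE ROWS 200000
SET FILES DEFRAG 30
SET FILES NIO TRUE
SET FILES NIO SIZE 8192
SET FILES LOG TRUE
SET FILES LOG SIZE 50
```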