
Own build system on Linux

by admin

Hello! It's been a while since I last posted here, but this time I'd like to share something I built myself, find out whether it is actually needed, how it could be refined, and generally hear any feedback on my work.


The problem of building and running a project on different machines has always haunted me. To realistically simulate the site under development on a local machine, you have to install a web server, an application server (perhaps joined by some other intermediate server), install a database, and configure that database. To deploy the site on a test server, you have to do the same work again. And later, the same on the production server.
The problem seems easy to solve: write all the commands into a file and just run it everywhere. That solution is relatively good, but not perfect, and here is why. Suppose one of the servers already has the right packages installed and the database is already set up there, but not completely: the latest migrations have not been applied. You have to open the file of commands and pick out by hand exactly what needs to be done, to avoid errors or breaking something.
But that is not even the most serious problem. The biggest problem I identified for myself was with Django. As you know, when Django starts it stays resident in memory, and if the code changes, those changes have no effect on the running site. You have to restart the server constantly. That is not difficult in itself. But what if the models change, so migrations also have to be created and applied? What if the web server settings change, so they also need to be applied and the web server restarted? What if I opened the project a month ago, have no recollection of what I changed there, and would like to "do it properly" without tediously typing in all the commands? What if the project is huge and I don't want to waste time on unnecessary commands at startup and build time? There could be a lot of "what ifs" like that.
The solution suggested itself: I needed automation, a project builder. On Linux, of course. Googling turned up plenty of project builders... each for one language or one technology. There was nothing truly universal, in the sense of "I write down commands and it runs them when I need them." There is cmake, but I didn't use it because I have a better solution. :)
At that point, the first version of my own reinvented wheel took shape. At first I wrote all the commands into one file, but at the slightest change it took a long time to restart everything, which was annoying. At first I put up with it. Then I wanted the script to have options, so I declared variables on its first lines and changed them via command-line arguments. Then I wanted to skip commands that weren't needed, so I added check functions. Then I had the idea of separating the commands and grouping some of them together.
I called a group of combined commands a "target". The name of the target is passed to the script, which then executes it. It turned out that some targets cannot execute without other targets executing first, so a hierarchy appeared. Then the command checker turned into a target checker. Then I wanted to simplify package installation, and the "package" entity was created.
All in all, I could describe the development process at length – it’s probably boring.


The final working version is a 400-line bash script, which I named xGod. I named it that because this file has become as indispensable to my work as air.
How xGod works:
It is run from the console: bash ./xgod build.xg run
build.xg – the build file with all the targets and auxiliary functions
run – the target to be executed
What build.xg consists of:
1. ordinary bash lines – they are executed sequentially as the file is read
2. targets
For example :

target syncdb: virtualenv createmysqluser
    source "$projectpath/venv/bin/activate"
    python3 "$projectpath/manage.py" makemigrations
    python3 "$projectpath/manage.py" migrate
    deactivate

syncdb – the target name; virtualenv createmysqluser – the targets that must be completed before syncdb itself can run, the so-called dependencies; everything below is just normal bash code that accomplishes the target itself.
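To illustrate what this name-plus-dependencies scheme implies, here is a minimal sketch of dependency-ordered target execution in plain bash. This is my own illustration, not the actual xGod code; the TARGETS/DEPS/DONE arrays and run_target function are assumptions made for the example.

```shell
#!/usr/bin/env bash
# Hypothetical sketch (not the actual xGod code) of running a target
# after its dependencies. TARGETS maps a target name to its bash body;
# DEPS maps it to a space-separated dependency list.
# Requires bash >= 4 for associative arrays.
declare -A TARGETS DEPS DONE

TARGETS[virtualenv]='echo "creating venv"';         DEPS[virtualenv]=''
TARGETS[createmysqluser]='echo "creating db user"'; DEPS[createmysqluser]=''
TARGETS[syncdb]='echo "running migrations"';        DEPS[syncdb]='virtualenv createmysqluser'

run_target() {
    local t=$1 dep
    [[ ${DONE[$t]-} ]] && return    # run each target at most once
    for dep in ${DEPS[$t]}; do
        run_target "$dep"           # satisfy dependencies first
    done
    eval "${TARGETS[$t]}"           # then run the target's own commands
    DONE[$t]=1
}

run_target syncdb
```

Running this prints the virtualenv and createmysqluser steps before the migrations step, which is exactly the ordering the dependency list asks for.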
3. packages
For example :

package gunicorn: python
    all:
        name: python3-gunicorn

gunicorn – the package name (to the script it is just another target); python – a dependency; all – the name of the distribution to which the nested settings apply (all means the settings apply to every distribution; currently only debian and ubuntu are supported, because I haven't worked with the others); name – the name of the package used for installation.
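As a rough idea of what such a package target could expand to on a Debian/Ubuntu system, here is a hypothetical sketch (assumed logic, not the actual xGod implementation). The echo stands in for the real install command so the snippet is safe to run:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: act only when dpkg does not already know the
# mapped package name. A real script would run "apt-get install -y"
# where the echo is.
install_pkg() {
    local name=$1
    if dpkg -s "$name" &>/dev/null; then
        echo "skip $name"             # already installed
    else
        echo "would install: $name"   # placeholder for the apt-get call
    fi
}

install_pkg python3-gunicorn
```

This is also what makes re-running the build cheap: packages that are already present are simply skipped.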
4. check functions
For example :

check syncdb()
    # any code
    return 1 # or return 0
endcheck

The check function decides whether the syncdb target should be executed or not. It is saved and executed as a normal function; it returns 1 if the target should be executed and 0 if it should not.
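In plain bash, the body of such a check might look like the hypothetical sketch below. The marker-file path and function name are made up for illustration; note the convention, inverted relative to usual shell exit codes, where 1 means "run the target":

```shell
#!/usr/bin/env bash
# Hypothetical check body for syncdb (illustration only): run the target
# unless a marker file says the migrations were already applied.
# Per the script's convention: 1 = execute the target, 0 = skip it.
marker=/tmp/xgod_syncdb_done    # made-up path for illustration

check_syncdb() {
    if [[ -f $marker ]]; then
        return 0    # already applied, skip syncdb
    fi
    return 1        # needs to run
}
```

A real check could just as well query the database or compare file timestamps; the only contract is the 1/0 return value.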
An extension support system was also written; the package targets are in fact handled by an extension. The syntax of extensions differs little from the syntax of build files. An extension may contain:
1. ordinary bash commands
2. a mandatory action function
For example :

action
    # any code with $1
endaction

This function takes a target name as input and executes it according to its own rules. It can get all the target's internals from the variable ${TARGETS[$1]}.
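A plain-bash equivalent of such an action might look like this hypothetical sketch; the handle_target function and the stored body string are my own illustration of reading a target's internals from the TARGETS associative array:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of an extension's action logic: it receives a
# target name in $1 and looks up that target's stored body in the
# TARGETS associative array, then acts on it by its own rules (here it
# just prints it; a real extension would parse the nested settings).
declare -A TARGETS
TARGETS[gunicorn]='all: name: python3-gunicorn'   # stored package settings

handle_target() {
    local body=${TARGETS[$1]}
    echo "target '$1' -> $body"
}

handle_target gunicorn
```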
3. a target check function
For example :

check
    # any code with $1
    return 1 # or return 0
endcheck

It also receives a target name as input and checks whether that target should be executed. If it should, the function must return 1; if not, 0.

More applications

This script can be used for more than building and running projects on a machine starting from a clean state. For example, I have my own set of packages that I want to see on every freshly installed system. New distribution releases keep changing the set of default packages, so after installation I never know which of my packages are already on the system. Of course I could find out, but I'm lazy. It is much easier to put all the necessary packages into a script and run a single command to install them: the ones already on the system are skipped, and the missing ones are installed. Easy.
As a consequence of this usage, the script's main requirement was to have minimal dependencies. That is why it is written in bash rather than Python or C++, so it can be run in any Linux environment without extra steps. The only restriction is that bash must be at least version 4, because earlier versions do not support associative arrays.
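A version guard like the following sketch (my own illustration, not necessarily what xGod does) makes the bash >= 4 requirement explicit, since associative arrays (declare -A) only exist since bash 4.0:

```shell
#!/usr/bin/env bash
# Hypothetical guard: refuse to run on shells older than bash 4,
# because the line after it relies on associative arrays.
if (( BASH_VERSINFO[0] < 4 )); then
    echo "this script requires bash >= 4 (associative arrays)" >&2
    exit 1
fi

declare -A TARGETS        # fails with "invalid option" on bash 3.x
TARGETS[demo]='echo hello'
```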
I will leave a link to the code here