From Mandriva Community Wiki
On this page I post my proposal for the structure of the build nodes and how they should be configured. Please feel free to send me any comments or even make changes directly on this page. Marcelo.
Definition of Build Node
A Build Node is any node intended for (remote) package compilation and software debugging.
- Provide a generic environment for quick packaging across all supported distros (like the current cooker chroot)
- Provide a way to build packages under clean chroots
- Provide a way to build clean chroots for generic purpose
- Provide resources for the official package builds
- Provide a stable icecream setup
- Provide a local http proxy/cache
Some nodes may perform special tasks, like:
- Rebuilding dkms packages (currently n3 and seggie, AFAIR)
- Providing disk space to the cluster
- All Build Nodes are monitored via SNMP on the main system.
This is the layout we are currently using on build1 (Curitiba), and since it has proved to be a good one, we should have:
- one partition for /boot
- one partition for /
- one partition for /chroots
- one partition for /home
- one partition for swap
That means: /, /boot and /chroots stay under RAID1 (mirroring) and /home goes under RAID0 (striping). The data in the home partition comes mostly from svn checkouts, so if it is lost we would lose only a few hours of work; at the same time, striping will almost double our free space on the nodes. Swap can be on RAID1 too.
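As a sketch, the resulting mounts could look like the fstab fragment below. The md device names and ext3 filesystem type are assumptions for illustration, not the actual build1 configuration:

```
# Hypothetical /etc/fstab for the proposed layout (device names assumed):
/dev/md0  /         ext3  defaults  1 1   # RAID1 (mirror)
/dev/md1  /boot     ext3  defaults  1 2   # RAID1
/dev/md2  /chroots  ext3  defaults  1 2   # RAID1
/dev/md3  /home     ext3  defaults  1 2   # RAID0 (stripe)
/dev/md4  swap      swap  defaults  0 0   # RAID1
```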
We should avoid chaining chroot calls when they are not needed. As iurt is not always the fastest way to test a build for a distro older than cooker, we should provide open chroots just like the cooker ones, entered directly from the base system, achieving this layout:
- main system (ssh -p 12)
  - cooker chroot (ssh -p 22)
    - packager iurt builds (for all distros)
  - 2007.1 chroot (ssh -p 20071)
  - 2008.0 chroot (ssh -p 20080)
  - iurt service (ssh -p 32)
    - official iurt builds (for all distros)
      - cooker chroot (ssh -p 22)
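Creating one of these open release chroots from the base system could be sketched as below. This is a non-runnable command sketch: the chroot path, the package selection and the "port = release number" sshd convention are assumptions following the list above, not an established procedure:

```
# Sketch: build an open 2008.0 chroot under /chroots (path assumed)
urpmi --urpmi-root /chroots/2008.0 --auto basesystem openssh-server

# Give the chroot's sshd its own port, matching the layout above
echo "Port 20080" >> /chroots/2008.0/etc/ssh/sshd_config
chroot /chroots/2008.0 /usr/sbin/sshd
```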
Currently we have:
[mrl@n5 ~]$ sudo du -shx /
8.7G    /
So we will be using a bit more disk space here, but as /home will be on RAID0, we will have plenty of free space to borrow from there.
Our i586 nodes have 2*250G disks. Using 5G for /boot and / and 4G for swap (both mirrored, so costing twice the raw space), we still have 482G remaining. Using 10G more per distro chroot and assuming 3 distros (again mirrored), we will have 422G free for /home, against the current 203G.
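The arithmetic above can be checked quickly; all figures come from the paragraph, with RAID1 costing twice its size in raw space and RAID0 costing nothing extra:

```python
# Verify the disk-space figures quoted above.
raw = 2 * 250                    # two 250G disks per i586 node

def raid1(gb):
    # RAID1 mirrors, so it consumes twice the raw space
    return 2 * gb

remaining = raw - raid1(5) - raid1(4)   # /boot + / (5G) and swap (4G)
assert remaining == 482

home = remaining - raid1(3 * 10)        # 3 distro chroots, 10G each
assert home == 422                      # vs. the current 203G

print(remaining, home)
```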
- Isolation between the cluster services and the packagers' system: packagers are not directly affected by changes in the cluster tools, and the cluster is not affected by the packagers, e.g. by them filling up the disk space.
- Enhanced stability of the cluster, as the official service will run from a stable system.
- Better disk space usage.
- Better system monitoring, as snmpd will run in the main system, so a) it will be a stable one and b) /etc/mtab will always be updated automatically.
- Icecream will probably be more stable.
- Possibility to provide chroots for older distros.
- Faster builds for users, as writes to RAID0 are roughly twice as fast as to RAID1.
- Allows us to define three separate admin teams: a) main system, b) chroots and c) iurt/official builds.
- Less network traffic thanks to a local real-time cache (squid), and consequently less load on the storage servers (kenobi and n3).
- Easier to maintain.
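For the local squid cache mentioned above, a minimal configuration sketch could look like this. The cache size, spool path and cluster network range are assumptions for illustration:

```
# Hypothetical minimal squid.conf for a per-node package cache
http_port 3128
cache_dir ufs /var/spool/squid 10000 16 256   # ~10G on-disk cache (assumed size)
maximum_object_size 200 MB                    # large enough for big RPMs
acl cluster src 192.168.0.0/24                # assumed cluster network
http_access allow cluster
http_access deny all
```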