Main / Initd
Buildroot supports three possible init systems: BusyBox init, SysV init, and systemd. The BusyBox documentation states: "Note, BusyBox init doesn't support runlevels. The runlevels field is completely ignored by BusyBox init. If you want runlevels, use sysvinit."

SysV Init and systemd

/etc/init.d/ contains the scripts that run on bootup. Order of execution is determined by name (the scripts are numbered in order), with an S prefix for startup scripts and K for shutdown scripts. This is the old SysV init way of doing things; systemd ignores the ordering and instead launches everything as soon as its dependencies are met.

The /etc/rcN.d/ directories define which services run at which runlevel. With sysvinit, if you start a service on boot with an S* script and want it to receive a SIGTERM on shutdown so that it can exit gracefully, you'll have to include a K* script as well: just put the same symlinks to the init.d service script in rc0.d/ (halt) and rc6.d/ (reboot). Note that killall5, despite its name, apparently does NOT send a SIGTERM to ALL processes, which is why you have to do it this way. Also keep in mind that if you kill the instance launched at boot and then restart the process, the new instance will NOT get the SIGTERM the original would have.

You can "install" a service with the update-rc.d command; this automatically generates the symbolic links in /etc/rcS.d/ that point to /etc/init.d/. You can view the dependencies of the services by looking at the text file /etc/init.d/.depend.boot.

You can force a runlevel transition with telinit <level>. The runlevel command gives you the previous and current runlevel. One of the things the scripts can do on boot is mount /proc. rc (/etc/init.d/rc) is the script that starts/stops services when the runlevel changes; I suppose it exits after the runlevel transition is complete. systemctl is the command-line tool for managing systemd functions.

initramfs

initramfs and initrd are two different methods of implementing the concept known as a ramdisk. initrd (the initial ramdisk) is considered to be the old way. ramfs itself is actually a simple file system type.

Memory

The kernel creates (allocates) some hash tables on start:

  < PID hash table entries: 4096 (order: 2, 16384 bytes)
  < Dentry cache hash table entries: 131072 (order: 7, 524288 bytes)
  < Inode-cache hash table entries: 65536 (order: 6, 262144 bytes)
  < Memory: 967964K/1032188K available (6144K kernel code, 238K rwdata, 1560K rodata, \
      29696K init, 153K bss, 47840K reserved, 16384K cma-reserved, 245760K highmem)

The entry counts depend on available memory: on sister systems with different amounts of memory, I've seen the entry counts double when the memory doubles.

NFS

https://www.kernel.org/doc/Documentation/filesystems/nfs/nfsroot.txt

On the server, open up access via /etc/exports with a line of the form

  <host RFS location> <client IP>(rw)

and start the nfs-kernel-server service. Also add the lines

  portmap: <client IP>
  lockd: 192.168.X.X
  rquotad: 192.168.X.X
  mountd: 192.168.X.X
  statd: 192.168.X.X

to the /etc/hosts.allow file.

On the client, create a U-Boot environment variable (note: U-Boot parameters updated as of 2016):

  bootargs=console=ttymxc1,115200 ip=${ipaddr} root=/dev/nfs rw nfsroot=${serverip}:/tftpboot/rootfs,tcp nfsrootdebug

You may need to add a ,tcp to the end of the nfsroot argument in U-Boot.

http://www.denx.de/wiki/view/DULG/LinuxNfsRoot

udevd

Kernel uevents and udev: udev is a replacement for devfs and runs in user space instead of kernel space. It allows devices to be accessed dynamically, e.g. by their vendor and product IDs.
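As a minimal sketch of that kind of rule (the file name, IDs, and symlink name below are just examples; 0403:6001 is a common FTDI USB-serial adapter), a file in /etc/udev/rules.d/ might contain:

  # /etc/udev/rules.d/99-usb-serial.rules (hypothetical example)
  # Match a USB-serial adapter by USB vendor/product ID and create a
  # stable symlink, so the device is always at /dev/ttyFTDI0 regardless
  # of enumeration order.
  SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", SYMLINK+="ttyFTDI0"

You can discover which attributes are available to match on with udevadm info -a -n /dev/ttyUSB0.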
udevd is started on init by /etc/rcS.d/udev, and its config file is /etc/udev/udev.conf. The required device information is exported by the sysfs file system: for every device the kernel has detected and initialized, a directory with the device name is created, containing attribute files with device-specific properties. Every time a device is added or removed, the kernel sends a uevent to notify udev of the change. The udev daemon reads and parses all of the rules from the /etc/udev/rules.d/*.rules files once at start-up and keeps them in memory; if rules files are changed, added, or removed, the daemon receives an event and updates its in-memory representation of the rules. Every received event is matched against the set of provided rules. The rules can add or change event environment keys, request a specific name for the device node to create, add symlinks pointing to the node, or add programs to run after the device node is created. The driver core uevents are received from a kernel netlink socket.

Time

sntp vs. ntpdate vs. ntpd

"ntpdate corrects the system time instantaneously, which can cause problems with some software (e.g. destroying a session which now appears old). ntpd intentionally corrects the system time slowly, avoiding that problem. You can add the -g switch when starting ntpd to allow it to make the first time update a big one, which is more or less equivalent to running ntpdate once before starting ntpd, which at one time was recommended practice."

"NTP slowly corrects your system's time. Be patient! A simple test is to change your system clock by 10 minutes before you go to bed and then check it when you get up. The time should be correct."

"While a full-featured NTP server or client reaches a very high level of accuracy and avoids abrupt timesteps as much as possible by using different mathematical and statistical methods and smooth clock speed adjustments, SNTP can only be recommended for simple applications where the requirements for accuracy and reliability are not too demanding. By disregarding drift values and using simplified methods of system clock adjustment (often simple time stepping), SNTP achieves only a low-quality time synchronization compared with a full NTP implementation."

"The main differences between NTP and SNTP are contained within the program itself. NTP has developed many complex algorithms containing calibration techniques aimed at maintaining accurate time. It allows multiple time references to be monitored, with selection algorithms to ascertain which is the most stable. Additionally, NTP adjusts the system time of a computer with very small skewed adjustments of the system clock in an attempt to make time corrections seamless: the system clock is sped up or slowed slightly to account for small time adjustments. SNTP adopts a much simpler approach: many of the complexities of the NTP algorithm are removed, and rather than skewing time, many SNTP clients step time. This is fine for many applications where a simple time-stamp is required. Additionally, SNTP lacks the ability to monitor and filter multiple NTP servers."

The normal operation of ntpd is to gradually sync the time in small increments; it can take a long time (days) to catch up if the time is way off. The -g option makes ntpd do one large step adjustment up front when it launches, and then keep running and maintain sync. The -g -q -x combo is often used as an ntpdate replacement: run once, match the time, then quit.
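As a rough sketch of the two modes (flag semantics are from the classic ntp.org ntpd; servers are taken from /etc/ntp.conf):

  # Daemon mode: -g lets the first correction exceed the normal 1000 s
  # panic threshold; after that ntpd keeps running and slews the clock
  # gradually to stay in sync.
  ntpd -g

  # ntpdate-style one-shot: set the clock once, then exit.
  #   -q  quit after the first time the clock is set
  #   -g  allow an arbitrarily large initial correction
  #   -x  prefer slewing over stepping for moderate offsets
  ntpd -g -q -x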
Virtual File Systems

The kernel mounts several virtual file systems on init. For example, Angstrom has an /etc/init.d/sysfs.sh init script which says "Mount initial set of virtual filesystems the kernel provides and that are required by everything" and mounts proc, sysfs, and debugfs. All three are file system "types" known to the mount tool.

Environment Variables in /etc/default/rcS

SULOGIN gives you a maintenance-mode login before the rest of the init scripts get to run. Full shell commands are not automatically available (I guess so you can't reboot), but if you 'exit', the boot process will continue.

Which init does my system use?

On Linux, the symlink /proc/<pid>/exe holds the path of the executable; use readlink -f /proc/<pid>/exe to read it. For init, the PID is 1.

What is /etc/motd?

The login message: https://en.wikipedia.org/wiki/Motd_(Unix)

How to make embedded Linux user profile edits similar to .bashrc commands?

Check out the /etc/profile script.

Optimizations

https://free-electrons.com/pub/conferences/2014/elc/opdenacker-boot-time/opdenacker-boot-time.pdf

Using bootchartd

You can get some idea of which processes come up on boot, and see their CPU and I/O utilization, with bootchart. BusyBox includes bootchartd, which is what runs on the target; you'll need to enable it with some additions to the kernel command line. The BusyBox version can make use of a config file at /etc/bootchartd.conf, though the only supported option as of 2017 is SAMPLE_PERIOD. It produces a /var/log/bootlog.tgz which can be copied over to your host for analysis. Don't use the bootchart package that Ubuntu can auto-install; instead go to https://github.com/xrmx/bootchart to get bootchart2.
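A sketch of the whole loop, assuming U-Boot on the target and bootchart2 installed on the host (the hostname and paths are examples):

  # In U-Boot: run bootchartd as init; it starts sampling in the
  # background and then execs the real /sbin/init.
  setenv bootargs "${bootargs} init=/sbin/bootchartd"

  # Optional, on the target: the sampling interval in seconds is the
  # only option the BusyBox version honors.
  echo 'SAMPLE_PERIOD=0.2' > /etc/bootchartd.conf

  # After boot, pull the log to the host and render a chart with
  # bootchart2's pybootchartgui:
  scp root@target:/var/log/bootlog.tgz .
  pybootchartgui bootlog.tgz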