mirror of https://github.com/NixOS/nixpkgs.git synced 2025-06-09 09:36:20 +09:00

nixos/doc: re-format

Jan Tojnar 2019-09-18 22:13:35 +02:00
parent 83c2ad80ca
commit ea6e8775bd
No known key found for this signature in database
GPG key ID: 7FAB2A15F7A607A4
124 changed files with 1946 additions and 5715 deletions


@@ -6,12 +6,7 @@
<title>Boot Problems</title>
<para>
If NixOS fails to boot, there are a number of kernel command line parameters
that may help you to identify or fix the issue. You can add these parameters
in the GRUB boot menu by pressing “e” to modify the selected boot entry
and editing the line starting with <literal>linux</literal>. The following
are some useful kernel command line parameters that are recognised by the
NixOS boot scripts or by systemd:
If NixOS fails to boot, there are a number of kernel command line parameters that may help you to identify or fix the issue. You can add these parameters in the GRUB boot menu by pressing “e” to modify the selected boot entry and editing the line starting with <literal>linux</literal>. The following are some useful kernel command line parameters that are recognised by the NixOS boot scripts or by systemd:
<variablelist>
<varlistentry>
<term>
@@ -19,9 +14,7 @@
</term>
<listitem>
<para>
Start a root shell if something goes wrong in stage 1 of the boot process
(the initial ramdisk). This is disabled by default because there is no
authentication for the root shell.
Start a root shell if something goes wrong in stage 1 of the boot process (the initial ramdisk). This is disabled by default because there is no authentication for the root shell.
</para>
</listitem>
</varlistentry>
@@ -31,10 +24,7 @@
</term>
<listitem>
<para>
Start an interactive shell in stage 1 before anything useful has been
done. That is, no modules have been loaded and no file systems have been
mounted, except for <filename>/proc</filename> and
<filename>/sys</filename>.
Start an interactive shell in stage 1 before anything useful has been done. That is, no modules have been loaded and no file systems have been mounted, except for <filename>/proc</filename> and <filename>/sys</filename>.
</para>
</listitem>
</varlistentry>
@@ -54,11 +44,7 @@
</term>
<listitem>
<para>
Boot into rescue mode (a.k.a. single user mode). This will cause systemd
to start nothing but the unit <literal>rescue.target</literal>, which
runs <command>sulogin</command> to prompt for the root password and start
a root login shell. Exiting the shell causes the system to continue with
the normal boot process.
Boot into rescue mode (a.k.a. single user mode). This will cause systemd to start nothing but the unit <literal>rescue.target</literal>, which runs <command>sulogin</command> to prompt for the root password and start a root login shell. Exiting the shell causes the system to continue with the normal boot process.
</para>
</listitem>
</varlistentry>
@@ -68,8 +54,7 @@
</term>
<listitem>
<para>
Make systemd very verbose and send log messages to the console instead of
the journal.
Make systemd very verbose and send log messages to the console instead of the journal.
</para>
</listitem>
</varlistentry>
@@ -80,11 +65,6 @@
</para>
<para>
If no login prompts or X11 login screens appear (e.g. due to hanging
dependencies), you can press Alt+ArrowUp. If you're lucky, this will start
rescue mode (described above). (Also note that since most units have a
90-second timeout before systemd gives up on them, the
<command>agetty</command> login prompts should appear eventually unless
something is very wrong.)
If no login prompts or X11 login screens appear (e.g. due to hanging dependencies), you can press Alt+ArrowUp. If you're lucky, this will start rescue mode (described above). (Also note that since most units have a 90-second timeout before systemd gives up on them, the <command>agetty</command> login prompts should appear eventually unless something is very wrong.)
</para>
</section>
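<para>
If you find that you need one of these parameters at every boot, it can be made permanent from <filename>configuration.nix</filename>; a minimal sketch using the <literal>boot.kernelParams</literal> option:
<programlisting>
boot.kernelParams = [ "boot.shell_on_fail" ];
</programlisting>
Parameters set this way are added to every generated boot entry, so this is best reserved for debugging sessions.
</para>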


@@ -5,31 +5,22 @@
xml:id="sec-nix-gc">
<title>Cleaning the Nix Store</title>
<para>
Nix has a purely functional model, meaning that packages are never upgraded
in place. Instead new versions of packages end up in a different location in
the Nix store (<filename>/nix/store</filename>). You should periodically run
Nix's <emphasis>garbage collector</emphasis> to remove old, unreferenced
packages. This is easy:
Nix has a purely functional model, meaning that packages are never upgraded in place. Instead new versions of packages end up in a different location in the Nix store (<filename>/nix/store</filename>). You should periodically run Nix's <emphasis>garbage collector</emphasis> to remove old, unreferenced packages. This is easy:
<screen>
<prompt>$ </prompt>nix-collect-garbage
</screen>
Alternatively, you can use a systemd unit that does the same in the
background:
Alternatively, you can use a systemd unit that does the same in the background:
<screen>
<prompt># </prompt>systemctl start nix-gc.service
</screen>
You can tell NixOS in <filename>configuration.nix</filename> to run this unit
automatically at certain points in time, for instance, every night at 03:15:
You can tell NixOS in <filename>configuration.nix</filename> to run this unit automatically at certain points in time, for instance, every night at 03:15:
<programlisting>
<xref linkend="opt-nix.gc.automatic"/> = true;
<xref linkend="opt-nix.gc.dates"/> = "03:15";
</programlisting>
</para>
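<para>
The collector's flags can be set as well; for instance, a sketch that also deletes generations older than 30 days, assuming the <literal>nix.gc.options</literal> option:
<programlisting>
nix.gc.automatic = true;
nix.gc.dates = "03:15";
nix.gc.options = "--delete-older-than 30d";
</programlisting>
</para>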
<para>
The commands above do not remove garbage collector roots, such as old system
configurations. Thus they do not remove the ability to roll back to previous
configurations. The following command deletes old roots, removing the ability
to roll back to them:
The commands above do not remove garbage collector roots, such as old system configurations. Thus they do not remove the ability to roll back to previous configurations. The following command deletes old roots, removing the ability to roll back to them:
<screen>
<prompt>$ </prompt>nix-collect-garbage -d
</screen>
@@ -37,27 +28,20 @@
<screen>
<prompt>$ </prompt>nix-env -p /nix/var/nix/profiles/per-user/eelco/profile --delete-generations old
</screen>
Note that NixOS system configurations are stored in the profile
<filename>/nix/var/nix/profiles/system</filename>.
Note that NixOS system configurations are stored in the profile <filename>/nix/var/nix/profiles/system</filename>.
</para>
<para>
Another way to reclaim disk space (often as much as 40% of the size of the
Nix store) is to run Nix's store optimiser, which seeks out identical files
in the store and replaces them with hard links to a single copy.
Another way to reclaim disk space (often as much as 40% of the size of the Nix store) is to run Nix's store optimiser, which seeks out identical files in the store and replaces them with hard links to a single copy.
<screen>
<prompt>$ </prompt>nix-store --optimise
</screen>
Since this command needs to read the entire Nix store, it can take quite a
while to finish.
Since this command needs to read the entire Nix store, it can take quite a while to finish.
</para>
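<para>
Optimisation can also happen automatically as paths are added to the store; a sketch assuming the <literal>nix.autoOptimiseStore</literal> option:
<programlisting>
nix.autoOptimiseStore = true;
</programlisting>
</para>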
<section xml:id="sect-nixos-gc-boot-entries">
<title>NixOS Boot Entries</title>
<para>
If your <filename>/boot</filename> partition runs out of space, after
clearing old profiles you must rebuild your system with
<literal>nixos-rebuild</literal> to update the <filename>/boot</filename>
partition and clear space.
If your <filename>/boot</filename> partition runs out of space, after clearing old profiles you must rebuild your system with <literal>nixos-rebuild</literal> to update the <filename>/boot</filename> partition and clear space.
</para>
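<para>
A typical recovery sequence, assuming old system generations are what fills the partition:
<screen>
<prompt># </prompt>nix-collect-garbage -d
<prompt># </prompt>nixos-rebuild boot
</screen>
</para>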
</section>
</chapter>


@@ -6,10 +6,7 @@
<title>Container Networking</title>
<para>
When you create a container using <literal>nixos-container create</literal>,
it gets its own private IPv4 address in the range
<literal>10.233.0.0/16</literal>. You can get the container's IPv4 address
as follows:
When you create a container using <literal>nixos-container create</literal>, it gets its own private IPv4 address in the range <literal>10.233.0.0/16</literal>. You can get the container's IPv4 address as follows:
<screen>
<prompt># </prompt>nixos-container show-ip foo
10.233.4.2
@@ -20,34 +17,21 @@
</para>
<para>
Networking is implemented using a pair of virtual Ethernet devices. The
network interface in the container is called <literal>eth0</literal>, while
the matching interface in the host is called
<literal>ve-<replaceable>container-name</replaceable></literal> (e.g.,
<literal>ve-foo</literal>). The container has its own network namespace and
the <literal>CAP_NET_ADMIN</literal> capability, so it can perform arbitrary
network configuration such as setting up firewall rules, without affecting or
having access to the host's network.
Networking is implemented using a pair of virtual Ethernet devices. The network interface in the container is called <literal>eth0</literal>, while the matching interface in the host is called <literal>ve-<replaceable>container-name</replaceable></literal> (e.g., <literal>ve-foo</literal>). The container has its own network namespace and the <literal>CAP_NET_ADMIN</literal> capability, so it can perform arbitrary network configuration such as setting up firewall rules, without affecting or having access to the host's network.
</para>
<para>
By default, containers cannot talk to the outside network. If you want that,
you should set up Network Address Translation (NAT) rules on the host to
rewrite container traffic to use your external IP address. This can be
accomplished using the following configuration on the host:
By default, containers cannot talk to the outside network. If you want that, you should set up Network Address Translation (NAT) rules on the host to rewrite container traffic to use your external IP address. This can be accomplished using the following configuration on the host:
<programlisting>
<xref linkend="opt-networking.nat.enable"/> = true;
<xref linkend="opt-networking.nat.internalInterfaces"/> = ["ve-+"];
<xref linkend="opt-networking.nat.externalInterface"/> = "eth0";
</programlisting>
where <literal>eth0</literal> should be replaced with the desired external
interface. Note that <literal>ve-+</literal> is a wildcard that matches all
container interfaces.
where <literal>eth0</literal> should be replaced with the desired external interface. Note that <literal>ve-+</literal> is a wildcard that matches all container interfaces.
</para>
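<para>
To check that NAT works, you can test outbound connectivity from inside the container; a sketch assuming a container named <literal>foo</literal> and outbound ICMP being allowed:
<screen>
<prompt># </prompt>nixos-container run foo -- ping -c 3 8.8.8.8
</screen>
</para>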
<para>
If you are using Network Manager, you need to explicitly prevent it from
managing container interfaces:
If you are using Network Manager, you need to explicitly prevent it from managing container interfaces:
<programlisting>
networking.networkmanager.unmanaged = [ "interface-name:ve-*" ];
</programlisting>


@@ -5,28 +5,15 @@
xml:id="ch-containers">
<title>Container Management</title>
<para>
NixOS allows you to easily run other NixOS instances as
<emphasis>containers</emphasis>. Containers are a light-weight approach to
virtualisation that runs software in the container at the same speed as in
the host system. NixOS containers share the Nix store of the host, making
container creation very efficient.
NixOS allows you to easily run other NixOS instances as <emphasis>containers</emphasis>. Containers are a light-weight approach to virtualisation that runs software in the container at the same speed as in the host system. NixOS containers share the Nix store of the host, making container creation very efficient.
</para>
<warning>
<para>
Currently, NixOS containers are not perfectly isolated from the host system.
This means that a user with root access to the container can do things that
affect the host. So you should not give container root access to untrusted
users.
Currently, NixOS containers are not perfectly isolated from the host system. This means that a user with root access to the container can do things that affect the host. So you should not give container root access to untrusted users.
</para>
</warning>
<para>
NixOS containers can be created in two ways: imperatively, using the command
<command>nixos-container</command>, and declaratively, by specifying them in
your <filename>configuration.nix</filename>. The declarative approach implies
that containers get upgraded along with your host system when you run
<command>nixos-rebuild</command>, which is often not what you want. By
contrast, in the imperative approach, containers are configured and updated
independently from the host system.
NixOS containers can be created in two ways: imperatively, using the command <command>nixos-container</command>, and declaratively, by specifying them in your <filename>configuration.nix</filename>. The declarative approach implies that containers get upgraded along with your host system when you run <command>nixos-rebuild</command>, which is often not what you want. By contrast, in the imperative approach, containers are configured and updated independently from the host system.
</para>
<xi:include href="imperative-containers.xml" />
<xi:include href="declarative-containers.xml" />


@@ -5,16 +5,10 @@
xml:id="sec-cgroups">
<title>Control Groups</title>
<para>
To keep track of the processes in a running system, systemd uses
<emphasis>control groups</emphasis> (cgroups). A control group is a set of
processes used to allocate resources such as CPU, memory or I/O bandwidth.
There can be multiple control group hierarchies, allowing each kind of
resource to be managed independently.
To keep track of the processes in a running system, systemd uses <emphasis>control groups</emphasis> (cgroups). A control group is a set of processes used to allocate resources such as CPU, memory or I/O bandwidth. There can be multiple control group hierarchies, allowing each kind of resource to be managed independently.
</para>
<para>
The command <command>systemd-cgls</command> lists all control groups in the
<literal>systemd</literal> hierarchy, which is what systemd uses to keep
track of the processes belonging to each service or user session:
The command <command>systemd-cgls</command> lists all control groups in the <literal>systemd</literal> hierarchy, which is what systemd uses to keep track of the processes belonging to each service or user session:
<screen>
<prompt>$ </prompt>systemd-cgls
├─user
@@ -32,34 +26,19 @@
│ └─2376 dhcpcd --config /nix/store/f8dif8dsi2yaa70n03xir8r653776ka6-dhcpcd.conf
└─ <replaceable>...</replaceable>
</screen>
Similarly, <command>systemd-cgls cpu</command> shows the cgroups in the CPU
hierarchy, which allows per-cgroup CPU scheduling priorities. By default,
every systemd service gets its own CPU cgroup, while all user sessions are in
the top-level CPU cgroup. This ensures, for instance, that a thousand
run-away processes in the <literal>httpd.service</literal> cgroup cannot
starve the CPU for one process in the <literal>postgresql.service</literal>
cgroup. (By contrast, if they were in the same cgroup, then the PostgreSQL
process would get 1/1001 of the cgroup's CPU time.) You can limit a
service's CPU share in <filename>configuration.nix</filename>:
Similarly, <command>systemd-cgls cpu</command> shows the cgroups in the CPU hierarchy, which allows per-cgroup CPU scheduling priorities. By default, every systemd service gets its own CPU cgroup, while all user sessions are in the top-level CPU cgroup. This ensures, for instance, that a thousand run-away processes in the <literal>httpd.service</literal> cgroup cannot starve the CPU for one process in the <literal>postgresql.service</literal> cgroup. (By contrast, if they were in the same cgroup, then the PostgreSQL process would get 1/1001 of the cgroup's CPU time.) You can limit a service's CPU share in <filename>configuration.nix</filename>:
<programlisting>
<link linkend="opt-systemd.services._name_.serviceConfig">systemd.services.httpd.serviceConfig</link>.CPUShares = 512;
</programlisting>
By default, every cgroup has 1024 CPU shares, so this will halve the CPU
allocation of the <literal>httpd.service</literal> cgroup.
By default, every cgroup has 1024 CPU shares, so this will halve the CPU allocation of the <literal>httpd.service</literal> cgroup.
</para>
<para>
There also is a <literal>memory</literal> hierarchy that controls memory
allocation limits; by default, all processes are in the top-level cgroup, so
any service or session can exhaust all available memory. Per-cgroup memory
limits can be specified in <filename>configuration.nix</filename>; for
instance, to limit <literal>httpd.service</literal> to 512 MiB of RAM
(excluding swap):
There also is a <literal>memory</literal> hierarchy that controls memory allocation limits; by default, all processes are in the top-level cgroup, so any service or session can exhaust all available memory. Per-cgroup memory limits can be specified in <filename>configuration.nix</filename>; for instance, to limit <literal>httpd.service</literal> to 512 MiB of RAM (excluding swap):
<programlisting>
<link linkend="opt-systemd.services._name_.serviceConfig">systemd.services.httpd.serviceConfig</link>.MemoryLimit = "512M";
</programlisting>
</para>
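<para>
The two limits shown above can be combined in a single attribute set:
<programlisting>
systemd.services.httpd.serviceConfig = {
  CPUShares = 512;
  MemoryLimit = "512M";
};
</programlisting>
</para>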
<para>
The command <command>systemd-cgtop</command> shows a continuously updated
list of all cgroups with their CPU and memory usage.
The command <command>systemd-cgtop</command> shows a continuously updated list of all cgroups with their CPU and memory usage.
</para>
</chapter>


@@ -6,10 +6,7 @@
<title>Declarative Container Specification</title>
<para>
You can also specify containers and their configuration in the host's
<filename>configuration.nix</filename>. For example, the following specifies
that there shall be a container named <literal>database</literal> running
PostgreSQL:
You can also specify containers and their configuration in the host's <filename>configuration.nix</filename>. For example, the following specifies that there shall be a container named <literal>database</literal> running PostgreSQL:
<programlisting>
containers.database =
{ config =
@@ -19,18 +16,11 @@ containers.database =
};
};
</programlisting>
If you run <literal>nixos-rebuild switch</literal>, the container will be
built. If the container was already running, it will be updated in place,
without rebooting. The container can be configured to start automatically by
setting <literal>containers.database.autoStart = true</literal> in its
configuration.
If you run <literal>nixos-rebuild switch</literal>, the container will be built. If the container was already running, it will be updated in place, without rebooting. The container can be configured to start automatically by setting <literal>containers.database.autoStart = true</literal> in its configuration.
</para>
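<para>
As a minimal sketch, enabling automatic start for the container above:
<programlisting>
containers.database.autoStart = true;
</programlisting>
</para>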
<para>
By default, declarative containers share the network namespace of the host,
meaning that they can listen on (privileged) ports. However, they cannot
change the network configuration. You can give a container its own network as
follows:
By default, declarative containers share the network namespace of the host, meaning that they can listen on (privileged) ports. However, they cannot change the network configuration. You can give a container its own network as follows:
<programlisting>
containers.database = {
<link linkend="opt-containers._name_.privateNetwork">privateNetwork</link> = true;
@@ -38,23 +28,14 @@ containers.database = {
<link linkend="opt-containers._name_.localAddress">localAddress</link> = "192.168.100.11";
};
</programlisting>
This gives the container a private virtual Ethernet interface with IP address
<literal>192.168.100.11</literal>, which is hooked up to a virtual Ethernet
interface on the host with IP address <literal>192.168.100.10</literal>. (See
the next section for details on container networking.)
This gives the container a private virtual Ethernet interface with IP address <literal>192.168.100.11</literal>, which is hooked up to a virtual Ethernet interface on the host with IP address <literal>192.168.100.10</literal>. (See the next section for details on container networking.)
</para>
<para>
To disable the container, just remove it from
<filename>configuration.nix</filename> and run <literal>nixos-rebuild
switch</literal>. Note that this will not delete the root directory of the
container in <literal>/var/lib/containers</literal>. Containers can be
destroyed using the imperative method: <literal>nixos-container destroy
foo</literal>.
To disable the container, just remove it from <filename>configuration.nix</filename> and run <literal>nixos-rebuild switch</literal>. Note that this will not delete the root directory of the container in <literal>/var/lib/containers</literal>. Containers can be destroyed using the imperative method: <literal>nixos-container destroy foo</literal>.
</para>
<para>
Declarative containers can be started and stopped using the corresponding
systemd service, e.g. <literal>systemctl start container@database</literal>.
Declarative containers can be started and stopped using the corresponding systemd service, e.g. <literal>systemctl start container@database</literal>.
</para>
</section>


@@ -6,9 +6,7 @@
<title>Imperative Container Management</title>
<para>
We'll cover imperative container management using
<command>nixos-container</command> first. Be aware that container management
is currently only possible as <literal>root</literal>.
We'll cover imperative container management using <command>nixos-container</command> first. Be aware that container management is currently only possible as <literal>root</literal>.
</para>
<para>
@@ -16,23 +14,14 @@
<screen>
# nixos-container create foo
</screen>
This creates the container's root directory in
<filename>/var/lib/containers/foo</filename> and a small configuration file
in <filename>/etc/containers/foo.conf</filename>. It also builds the
container's initial system configuration and stores it in
<filename>/nix/var/nix/profiles/per-container/foo/system</filename>. You can
modify the initial configuration of the container on the command line. For
instance, to create a container that has <command>sshd</command> running,
with the given public key for <literal>root</literal>:
This creates the container's root directory in <filename>/var/lib/containers/foo</filename> and a small configuration file in <filename>/etc/containers/foo.conf</filename>. It also builds the container's initial system configuration and stores it in <filename>/nix/var/nix/profiles/per-container/foo/system</filename>. You can modify the initial configuration of the container on the command line. For instance, to create a container that has <command>sshd</command> running, with the given public key for <literal>root</literal>:
<screen>
# nixos-container create foo --config '
<xref linkend="opt-services.openssh.enable"/> = true;
<link linkend="opt-users.users._name__.openssh.authorizedKeys.keys">users.users.root.openssh.authorizedKeys.keys</link> = ["ssh-dss AAAAB3N…"];
'
</screen>
By default the next free address in the <literal>10.233.0.0/16</literal> subnet will be chosen
as container IP. This behavior can be altered by setting <literal>--host-address</literal> and
<literal>--local-address</literal>:
By default the next free address in the <literal>10.233.0.0/16</literal> subnet will be chosen as container IP. This behavior can be altered by setting <literal>--host-address</literal> and <literal>--local-address</literal>:
<screen>
# nixos-container create test --config-file test-container.nix \
--local-address 10.235.1.2 --host-address 10.235.1.1
@@ -44,35 +33,25 @@
<screen>
# nixos-container start foo
</screen>
This command will return as soon as the container has booted and has reached
<literal>multi-user.target</literal>. On the host, the container runs within
a systemd unit called
<literal>container@<replaceable>container-name</replaceable>.service</literal>.
Thus, if something went wrong, you can get status info using
<command>systemctl</command>:
This command will return as soon as the container has booted and has reached <literal>multi-user.target</literal>. On the host, the container runs within a systemd unit called <literal>container@<replaceable>container-name</replaceable>.service</literal>. Thus, if something went wrong, you can get status info using <command>systemctl</command>:
<screen>
# systemctl status container@foo
</screen>
</para>
<para>
If the container has started successfully, you can log in as root using the
<command>root-login</command> operation:
If the container has started successfully, you can log in as root using the <command>root-login</command> operation:
<screen>
# nixos-container root-login foo
[root@foo:~]#
</screen>
Note that only root on the host can do this (since there is no
authentication). You can also get a regular login prompt using the
<command>login</command> operation, which is available to all users on the
host:
Note that only root on the host can do this (since there is no authentication). You can also get a regular login prompt using the <command>login</command> operation, which is available to all users on the host:
<screen>
# nixos-container login foo
foo login: alice
Password: ***
</screen>
With <command>nixos-container run</command>, you can execute arbitrary
commands in the container:
With <command>nixos-container run</command>, you can execute arbitrary commands in the container:
<screen>
# nixos-container run foo -- uname -a
Linux foo 3.4.82 #1-NixOS SMP Thu Mar 20 14:44:05 UTC 2014 x86_64 GNU/Linux
@@ -80,15 +59,11 @@ Linux foo 3.4.82 #1-NixOS SMP Thu Mar 20 14:44:05 UTC 2014 x86_64 GNU/Linux
</para>
<para>
There are several ways to change the configuration of the container. First,
on the host, you can edit
<literal>/var/lib/container/<replaceable>name</replaceable>/etc/nixos/configuration.nix</literal>,
and run
There are several ways to change the configuration of the container. First, on the host, you can edit <literal>/var/lib/container/<replaceable>name</replaceable>/etc/nixos/configuration.nix</literal>, and run
<screen>
# nixos-container update foo
</screen>
This will build and activate the new configuration. You can also specify a
new configuration on the command line:
This will build and activate the new configuration. You can also specify a new configuration on the command line:
<screen>
# nixos-container update foo --config '
<xref linkend="opt-services.httpd.enable"/> = true;
@@ -99,23 +74,15 @@ Linux foo 3.4.82 #1-NixOS SMP Thu Mar 20 14:44:05 UTC 2014 x86_64 GNU/Linux
# curl http://$(nixos-container show-ip foo)/
&lt;!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">…
</screen>
However, note that this will overwrite the container's
<filename>/etc/nixos/configuration.nix</filename>.
However, note that this will overwrite the container's <filename>/etc/nixos/configuration.nix</filename>.
</para>
<para>
Alternatively, you can change the configuration from within the container
itself by running <command>nixos-rebuild switch</command> inside the
container. Note that the container by default does not have a copy of the
NixOS channel, so you should run <command>nix-channel --update</command>
first.
Alternatively, you can change the configuration from within the container itself by running <command>nixos-rebuild switch</command> inside the container. Note that the container by default does not have a copy of the NixOS channel, so you should run <command>nix-channel --update</command> first.
</para>
<para>
Containers can be stopped and started using <literal>nixos-container
stop</literal> and <literal>nixos-container start</literal>, respectively, or
by using <command>systemctl</command> on the containers service unit. To
destroy a container, including its file system, do
Containers can be stopped and started using <literal>nixos-container stop</literal> and <literal>nixos-container start</literal>, respectively, or by using <command>systemctl</command> on the containers service unit. To destroy a container, including its file system, do
<screen>
# nixos-container destroy foo
</screen>


@@ -5,18 +5,11 @@
xml:id="sec-logging">
<title>Logging</title>
<para>
System-wide logging is provided by systemd's <emphasis>journal</emphasis>,
which subsumes traditional logging daemons such as syslogd and klogd. Log
entries are kept in binary files in <filename>/var/log/journal/</filename>.
The command <literal>journalctl</literal> allows you to see the contents of
the journal. For example,
System-wide logging is provided by systemd's <emphasis>journal</emphasis>, which subsumes traditional logging daemons such as syslogd and klogd. Log entries are kept in binary files in <filename>/var/log/journal/</filename>. The command <literal>journalctl</literal> allows you to see the contents of the journal. For example,
<screen>
<prompt>$ </prompt>journalctl -b
</screen>
shows all journal entries since the last reboot. (The output of
<command>journalctl</command> is piped into <command>less</command> by
default.) You can use various options and match operators to restrict output
to messages of interest. For instance, to get all messages from PostgreSQL:
shows all journal entries since the last reboot. (The output of <command>journalctl</command> is piped into <command>less</command> by default.) You can use various options and match operators to restrict output to messages of interest. For instance, to get all messages from PostgreSQL:
<screen>
<prompt>$ </prompt>journalctl -u postgresql.service
-- Logs begin at Mon, 2013-01-07 13:28:01 CET, end at Tue, 2013-01-08 01:09:57 CET. --
@@ -26,8 +19,7 @@ Jan 07 15:44:14 hagbard postgres[2681]: [2-1] LOG: database system is shut down
Jan 07 15:45:10 hagbard postgres[2532]: [1-1] LOG: database system was shut down at 2013-01-07 15:44:14 CET
Jan 07 15:45:13 hagbard postgres[2500]: [1-1] LOG: database system is ready to accept connections
</screen>
Or to get all messages since the last reboot that have at least a
“critical” severity level:
Or to get all messages since the last reboot that have at least a “critical” severity level:
<screen>
<prompt>$ </prompt>journalctl -b -p crit
Dec 17 21:08:06 mandark sudo[3673]: pam_unix(sudo:auth): auth could not identify password for [alice]
@@ -35,9 +27,6 @@ Dec 29 01:30:22 mandark kernel[6131]: [1053513.909444] CPU6: Core temperature ab
</screen>
</para>
<para>
The system journal is readable by root and by users in the
<literal>wheel</literal> and <literal>systemd-journal</literal> groups. All
users have a private journal that can be read using
<command>journalctl</command>.
The system journal is readable by root and by users in the <literal>wheel</literal> and <literal>systemd-journal</literal> groups. All users have a private journal that can be read using <command>journalctl</command>.
</para>
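<para>
The journal's disk usage can be capped from <filename>configuration.nix</filename>; a sketch assuming the <literal>services.journald.extraConfig</literal> option and journald's <literal>SystemMaxUse</literal> setting:
<programlisting>
services.journald.extraConfig = ''
  SystemMaxUse=1G
'';
</programlisting>
</para>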
</chapter>


@@ -9,8 +9,6 @@
You can enter rescue mode by running:
<screen>
# systemctl rescue</screen>
This will eventually give you a single-user root shell. Systemd will stop
(almost) all system services. To get out of maintenance mode, just exit from
the rescue shell.
This will eventually give you a single-user root shell. Systemd will stop (almost) all system services. To get out of maintenance mode, just exit from the rescue shell.
</para>
</section>


@@ -6,20 +6,11 @@
<title>Network Problems</title>
<para>
Nix uses a so-called <emphasis>binary cache</emphasis> to optimise building a
package from source into downloading it as a pre-built binary. That is,
whenever a command like <command>nixos-rebuild</command> needs a path in the
Nix store, Nix will try to download that path from the Internet rather than
build it from source. The default binary cache is
<uri>https://cache.nixos.org/</uri>. If this cache is unreachable, Nix
operations may take a long time due to HTTP connection timeouts. You can
disable the use of the binary cache by adding <option>--option
use-binary-caches false</option>, e.g.
Nix uses a so-called <emphasis>binary cache</emphasis> to optimise building a package from source into downloading it as a pre-built binary. That is, whenever a command like <command>nixos-rebuild</command> needs a path in the Nix store, Nix will try to download that path from the Internet rather than build it from source. The default binary cache is <uri>https://cache.nixos.org/</uri>. If this cache is unreachable, Nix operations may take a long time due to HTTP connection timeouts. You can disable the use of the binary cache by adding <option>--option use-binary-caches false</option>, e.g.
<screen>
# nixos-rebuild switch --option use-binary-caches false
</screen>
If you have an alternative binary cache at your disposal, you can use it
instead:
If you have an alternative binary cache at your disposal, you can use it instead:
<screen>
# nixos-rebuild switch --option binary-caches http://my-cache.example.org/
</screen>
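<para>
To use an alternative cache permanently rather than per invocation, it can be set in <filename>configuration.nix</filename>; a sketch assuming the <literal>nix.binaryCaches</literal> option:
<programlisting>
nix.binaryCaches = [ "http://my-cache.example.org/" ];
</programlisting>
</para>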


@@ -16,20 +16,15 @@
<screen>
# reboot
</screen>
which is equivalent to <command>systemctl reboot</command>. Alternatively,
you can quickly reboot the system using <literal>kexec</literal>, which
bypasses the BIOS by directly loading the new kernel into memory:
which is equivalent to <command>systemctl reboot</command>. Alternatively, you can quickly reboot the system using <literal>kexec</literal>, which bypasses the BIOS by directly loading the new kernel into memory:
<screen>
# systemctl kexec
</screen>
</para>
<para>
The machine can be suspended to RAM (if supported) using <command>systemctl
suspend</command>, and suspended to disk using <command>systemctl
hibernate</command>.
The machine can be suspended to RAM (if supported) using <command>systemctl suspend</command>, and suspended to disk using <command>systemctl hibernate</command>.
</para>
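<para>
For reference, both are plain <command>systemctl</command> invocations:
<screen>
# systemctl suspend
# systemctl hibernate
</screen>
</para>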
<para>
These commands can be run by any user who is logged in locally, i.e. on a
virtual console or in X11; otherwise, the user is asked for authentication.
These commands can be run by any user who is logged in locally, i.e. on a virtual console or in X11; otherwise, the user is asked for authentication.
</para>
</chapter>


@@ -6,19 +6,11 @@
<title>Rolling Back Configuration Changes</title>
<para>
After running <command>nixos-rebuild</command> to switch to a new
configuration, you may find that the new configuration doesn't work very
well. In that case, there are several ways to return to a previous
configuration.
After running <command>nixos-rebuild</command> to switch to a new configuration, you may find that the new configuration doesn't work very well. In that case, there are several ways to return to a previous configuration.
</para>
<para>
First, the GRUB boot manager allows you to boot into any previous
configuration that hasn't been garbage-collected. These configurations can
be found under the GRUB submenu “NixOS - All configurations”. This is
especially useful if the new configuration fails to boot. After the system
has booted, you can make the selected configuration the default for
subsequent boots:
First, the GRUB boot manager allows you to boot into any previous configuration that hasn't been garbage-collected. These configurations can be found under the GRUB submenu “NixOS - All configurations”. This is especially useful if the new configuration fails to boot. After the system has booted, you can make the selected configuration the default for subsequent boots:
<screen>
# /run/current-system/bin/switch-to-configuration boot</screen>
</para>
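<para>
Second, once a system is running, you can switch back to the previous generation directly; a sketch using the rollback flag of <command>nixos-rebuild</command>:
<screen>
# nixos-rebuild switch --rollback
</screen>
</para>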
@@ -30,8 +22,7 @@
This is equivalent to running:
<screen>
# /nix/var/nix/profiles/system-<replaceable>N</replaceable>-link/bin/switch-to-configuration switch</screen>
where <replaceable>N</replaceable> is the number of the NixOS system
configuration. To get a list of the available configurations, do:
where <replaceable>N</replaceable> is the number of the NixOS system configuration. To get a list of the available configurations, do:
<screen>
<prompt>$ </prompt>ls -l /nix/var/nix/profiles/system-*-link
<replaceable>...</replaceable>


@@ -6,8 +6,7 @@
<title>Administration</title>
<partintro xml:id="ch-running-intro">
<para>
This chapter describes various aspects of managing a running NixOS system,
such as how to use the <command>systemd</command> service manager.
This chapter describes various aspects of managing a running NixOS system, such as how to use the <command>systemd</command> service manager.
</para>
</partintro>
<xi:include href="service-mgmt.xml" />

View file

@@ -5,21 +5,10 @@
xml:id="sec-systemctl">
<title>Service Management</title>
<para>
In NixOS, all system services are started and monitored using the systemd
program. Systemd is the “init” process of the system (i.e. PID 1), the
parent of all other processes. It manages a set of so-called “units”,
which can be things like system services (programs), but also mount points,
swap files, devices, targets (groups of units) and more. Units can have
complex dependencies; for instance, one unit can require that another unit
must be successfully started before the first unit can be started. When the
system boots, it starts a unit named <literal>default.target</literal>; the
dependencies of this unit cause all system services to be started, file
systems to be mounted, swap files to be activated, and so on.
In NixOS, all system services are started and monitored using the systemd program. Systemd is the “init” process of the system (i.e. PID 1), the parent of all other processes. It manages a set of so-called “units”, which can be things like system services (programs), but also mount points, swap files, devices, targets (groups of units) and more. Units can have complex dependencies; for instance, one unit can require that another unit must be successfully started before the first unit can be started. When the system boots, it starts a unit named <literal>default.target</literal>; the dependencies of this unit cause all system services to be started, file systems to be mounted, swap files to be activated, and so on.
</para>
<para>
The command <command>systemctl</command> is the main way to interact with
<command>systemd</command>. Without any arguments, it shows the status of
active units:
The command <command>systemctl</command> is the main way to interact with <command>systemd</command>. Without any arguments, it shows the status of active units:
<screen>
<prompt>$ </prompt>systemctl
-.mount loaded active mounted /
@@ -30,8 +19,7 @@ graphical.target loaded active active Graphical Interface
</screen>
</para>
<para>
You can ask for detailed status information about a unit, for instance, the
PostgreSQL database service:
You can ask for detailed status information about a unit, for instance, the PostgreSQL database service:
<screen>
<prompt>$ </prompt>systemctl status postgresql.service
postgresql.service - PostgreSQL Server
@@ -51,9 +39,7 @@ Jan 07 15:55:57 hagbard postgres[2390]: [1-1] LOG: database system is ready to
Jan 07 15:55:57 hagbard postgres[2420]: [1-1] LOG: autovacuum launcher started
Jan 07 15:55:57 hagbard systemd[1]: Started PostgreSQL Server.
</screen>
Note that this shows the status of the unit (active and running), all the
processes belonging to the service, as well as the most recent log messages
from the service.
Note that this shows the status of the unit (active and running), all the processes belonging to the service, as well as the most recent log messages from the service.
</para>
<para>
Units can be stopped, started or restarted:
@@ -62,9 +48,7 @@ Jan 07 15:55:57 hagbard systemd[1]: Started PostgreSQL Server.
# systemctl start postgresql.service
# systemctl restart postgresql.service
</screen>
These operations are synchronous: they wait until the service has finished
starting or stopping (or has failed). Starting a unit will cause the
dependencies of that unit to be started as well (if necessary).
These operations are synchronous: they wait until the service has finished starting or stopping (or has failed). Starting a unit will cause the dependencies of that unit to be started as well (if necessary).
</para>
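<para>
On NixOS, unit files are normally generated from <filename>configuration.nix</filename> rather than written by hand. A minimal sketch of declaring a service (the name <literal>myservice</literal> and the use of <literal>pkgs.hello</literal> are illustrative, and <literal>pkgs</literal> is assumed to be in scope in the module):
<programlisting>
systemd.services.myservice = {
  description = "My example service";
  wantedBy = [ "multi-user.target" ];
  serviceConfig.ExecStart = "${pkgs.hello}/bin/hello";
};
</programlisting>
</para>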
<!-- - cgroups: each service and user session is a cgroup


@@ -6,23 +6,15 @@
<title>Nix Store Corruption</title>
<para>
After a system crash, it's possible for files in the Nix store to become
corrupted. (For instance, the Ext4 file system has the tendency to replace
un-synced files with zero bytes.) NixOS tries hard to prevent this from
happening: it performs a <command>sync</command> before switching to a new
configuration, and Nix's database is fully transactional. If corruption
still occurs, you may be able to fix it automatically.
After a system crash, it's possible for files in the Nix store to become corrupted. (For instance, the Ext4 file system has the tendency to replace un-synced files with zero bytes.) NixOS tries hard to prevent this from happening: it performs a <command>sync</command> before switching to a new configuration, and Nix's database is fully transactional. If corruption still occurs, you may be able to fix it automatically.
</para>
<para>
If the corruption is in a path in the closure of the NixOS system
configuration, you can fix it by doing
If the corruption is in a path in the closure of the NixOS system configuration, you can fix it by doing
<screen>
<prompt># </prompt>nixos-rebuild switch --repair
</screen>
This will cause Nix to check every path in the closure, and if its
cryptographic hash differs from the hash recorded in Nix's database, the
path is rebuilt or redownloaded.
This will cause Nix to check every path in the closure, and if its cryptographic hash differs from the hash recorded in Nix's database, the path is rebuilt or redownloaded.
</para>
<para>
@@ -30,7 +22,6 @@
<screen>
<prompt># </prompt>nix-store --verify --check-contents --repair
</screen>
Any corrupt paths will be redownloaded if they're available in a binary
cache; otherwise, they cannot be repaired.
Any corrupt paths will be redownloaded if they're available in a binary cache; otherwise, they cannot be repaired.
</para>
</section>


@@ -5,8 +5,7 @@
xml:id="ch-troubleshooting">
<title>Troubleshooting</title>
<para>
This chapter describes solutions to common problems you might encounter when
you manage your NixOS system.
This chapter describes solutions to common problems you might encounter when you manage your NixOS system.
</para>
<xi:include href="boot-problems.xml" />
<xi:include href="maintenance-mode.xml" />


@@ -5,10 +5,7 @@
xml:id="sec-user-sessions">
<title>User Sessions</title>
<para>
Systemd keeps track of all users who are logged into the system (e.g. on a
virtual console or remotely via SSH). The command <command>loginctl</command>
allows querying and manipulating user sessions. For instance, to list all
user sessions:
Systemd keeps track of all users who are logged into the system (e.g. on a virtual console or remotely via SSH). The command <command>loginctl</command> allows querying and manipulating user sessions. For instance, to list all user sessions:
<screen>
<prompt>$ </prompt>loginctl
SESSION UID USER SEAT
@@ -16,10 +13,7 @@
c3 0 root seat0
c4 500 alice
</screen>
This shows that two users are logged in locally, while another is logged in
remotely. (“Seats” are essentially the combinations of displays and input
devices attached to the system; usually, there is only one seat.) To get
information about a session:
This shows that two users are logged in locally, while another is logged in remotely. (“Seats” are essentially the combinations of displays and input devices attached to the system; usually, there is only one seat.) To get information about a session:
<screen>
<prompt>$ </prompt>loginctl session-status c3
c3 - root (0)
@@ -34,10 +28,7 @@ c3 - root (0)
├─10339 -bash
└─10355 w3m nixos.org
</screen>
This shows that the user is logged in on virtual console 3. It also lists the
processes belonging to this session. Since systemd keeps track of this, you
can terminate a session in a way that ensures that all the session's
processes are gone:
This shows that the user is logged in on virtual console 3. It also lists the processes belonging to this session. Since systemd keeps track of this, you can terminate a session in a way that ensures that all the session's processes are gone:
<screen>
# loginctl terminate-session c3
</screen>
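<para>
All sessions belonging to a user can likewise be terminated at once; a sketch, where <command>terminate-user</command> accepts a user name or UID:
<screen>
# loginctl terminate-user alice
</screen>
</para>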