Last preparations
About $LFS
Throughout this book the environment variable LFS will be used several
times. It is paramount that this variable is always defined. It should be set
to the mount point you chose for your LFS partition. Check that your LFS
variable is set up properly with:
echo $LFS
Make sure the output shows the path to your LFS partition's mount
point, which is /mnt/lfs if you
followed our example. If the output is wrong, you can always set the variable
with:
export LFS=/mnt/lfs
Having this variable set means that if you are told to run a command like
mkdir $LFS/tools, you can type it literally. Your shell
will replace "$LFS" with "/mnt/lfs" (or whatever you set the variable to) when
it processes the command line.
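If you would rather not type the export command every time you start working on LFS, one option is to append it to a shell startup file on the host, for example:
echo 'export LFS=/mnt/lfs' >> ~/.bash_profile
Adjust the value if you chose a different mount point.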
Creating the $LFS/tools directory
All programs compiled in this chapter will be installed under $LFS/tools to keep them separate from the
programs compiled in the next chapter. The programs compiled here are only
temporary tools and won't be a part of the final LFS system; by keeping them
in a separate directory, we can easily throw them away later.
Later on you might wish to search through the binaries of your system to
see what files they make use of or link against. To make this searching easier
you may want to choose a unique name for the directory in which the temporary
tools are stored. Instead of the simple "tools" you could use something like
"tools-for-lfs". However, you'll need to be careful to adjust all references to
"tools" throughout the book -- including those in any patches, notably the
GCC Specs Patch.
Create the required directory by running the following:
mkdir $LFS/tools
The next step is to create a /tools symlink on
your host system. It will point to the directory we just created on the LFS
partition:
ln -s $LFS/tools /
The above command is correct. The ln command
has a few syntactic variations, so be sure to check the info page before
reporting what you may think is an error.
The created symlink enables us to compile our toolchain so that it always
refers to /tools, meaning that the compiler, assembler
and linker will work both in this chapter (when we are still using some tools
from the host) and in the next (when we are chrooted to
the LFS partition).
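If you want to verify the result, listing the symlink is a simple check:
ls -ld /tools
The output should end with something like /tools -> /mnt/lfs/tools, confirming that it points to the directory on the LFS partition.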
Adding the user lfs
When logged in as root, making a single mistake
can damage or even wreck your system. Therefore we recommend that you
build the packages in this chapter as an unprivileged user. You could
of course use your own user name, but to make it easier to set up a clean
work environment we'll create a new user lfs and
use this one during the installation process. As root,
issue the following command to add the new user:
useradd -s /bin/bash -m -k /dev/null lfs
The meaning of the switches:
-s /bin/bash: This makes
bash the default shell for user
lfs.
-m -k /dev/null: These create a home
directory for lfs, while preventing the files from a
possible /etc/skel being copied into it.
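If you want to confirm that the account was created as intended, you can, for example, run:
id lfs
which prints the user and group IDs that were assigned to lfs.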
If you want to be able to log in as lfs, then give
this new user a password:
passwd lfs
Now grant this new user lfs full access to
$LFS/tools by giving it ownership
of the directory:
chown lfs $LFS/tools
If you made a separate working directory as suggested, give user
lfs ownership of this directory too:
chown lfs $LFS/sources
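A quick way to verify the new ownership is to list both directories:
ls -ld $LFS/tools $LFS/sources
The owner column of each entry should now read lfs.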
Next, log in as user lfs. This can be done via a
virtual console, through a display manager, or with the following substitute
user command:
su - lfs
The "-" instructs su to start a
login shell.
Setting up the environment
We're going to set up a good working environment by creating two new
startup files for the bash shell. While logged in as
user lfs, issue the following command to create a new
.bash_profile:
cat > ~/.bash_profile << "EOF"
exec env -i HOME=$HOME TERM=$TERM PS1='\u:\w\$ ' /bin/bash
EOF
Normally, when you log on as user lfs,
the initial shell is a login shell which reads the
/etc/profile of your host (probably containing some
settings of environment variables) and then .bash_profile.
The exec env -i ... /bin/bash command in the latter file
replaces the running shell with a new one with a completely empty environment,
except for the HOME, TERM and PS1 variables. This ensures that no unwanted and
potentially hazardous environment variables from the host system leak into our
build environment. The technique used here is a little strange, but it achieves
the goal of enforcing a clean environment.
The new instance of the shell is a non-login shell,
which doesn't read the /etc/profile or
.bash_profile files, but reads the
.bashrc file instead. Create this latter file now:
cat > ~/.bashrc << "EOF"
set +h
umask 022
LFS=/mnt/lfs
LC_ALL=POSIX
PATH=/tools/bin:/bin:/usr/bin
export LFS LC_ALL PATH
EOF
The set +h command turns off
bash's hash function. Normally hashing is a useful
feature: bash uses a hash table to remember the
full pathnames of executable files to avoid searching the PATH time and time
again to find the same executable. However, we'd like the new tools to be
used as soon as they are installed. By switching off the hash function, our
"interactive" commands (make,
patch, sed,
cp and so forth) will always use
the newest available version during the build process.
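As an aside, you can experiment with the hash table yourself in a shell where hashing is still enabled, using bash's hash builtin:
hash make
hash
hash -r
The first command looks up make and remembers its full path, the second lists all commands bash currently remembers, and the last clears the table, forcing a fresh PATH search.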
Setting the user file-creation mask to 022 ensures that newly created
files and directories are only writable for their owner, but readable and
executable for anyone.
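To see the effect of the umask, you could create a throwaway file and directory and inspect their permissions, for example:
touch testfile
mkdir testdir
ls -ld testfile testdir
rm -r testfile testdir
With a umask of 022, the file should show up as -rw-r--r-- and the directory as drwxr-xr-x.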
The LFS variable should of course be set to the mount point you
chose.
The LC_ALL variable controls the localization of certain programs,
making their messages follow the conventions of a specified country. If your
host system uses a version of Glibc older than 2.2.4,
having LC_ALL set to something other than "POSIX" or "C" during this chapter
may cause trouble if you exit the chroot environment and wish to return later.
By setting LC_ALL to "POSIX" (or "C", the two are equivalent) we ensure that
everything will work as expected in the chroot environment.
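If you are curious which locale settings are currently active on your host, the locale command will print them:
locale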
We prepend /tools/bin to the standard PATH so
that, as we move along through this chapter, the tools we build will get used
during the rest of the building process.
Finally, to have our environment fully prepared for building the
temporary tools, source the just-created profile:
source ~/.bash_profile
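To confirm that the new environment is in place, you can, for example, check the two most important variables:
echo $LFS
echo $PATH
The first should print the mount point of your LFS partition, and the second should begin with /tools/bin.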
About SBUs
Most people would like to know beforehand approximately how long it
takes to compile and install each package. But "Linux from Scratch" is built
on so many different systems that it is not possible to give actual times that are
anywhere near accurate: the biggest package (Glibc) won't take more than
twenty minutes on the fastest systems, but will take something like three days
on the slowest -- no kidding. So instead of giving actual times, we've come up
with the idea of using the Static Binutils Unit
(abbreviated to SBU).
It works like this: the first package you compile in this book is the
statically linked Binutils in , and the time it
takes to compile this package is what we call the "Static Binutils Unit" or
"SBU". All other compile times will be expressed relative to this time.
For example, the time it takes to build the static version of GCC is
&gcc-time-tools-pass1;s. This means that if on your system it took 10 minutes
to compile and install the static Binutils, then you know it will take
approximately 45 minutes to build the static GCC. Fortunately, most build times
are much shorter than that of Binutils.
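If you would like to determine the SBU value for your own machine, one simple approach is to wrap the Binutils build commands in bash's time keyword, roughly along these lines (the configure switches shown here are placeholders; use the exact ones given in the Binutils instructions):
time { ./configure --prefix=/tools && make && make install; }
The real time reported at the end is one SBU on your machine; multiply the SBU figures in the book by it to estimate the other builds.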
Note that if the system compiler on your host is GCC-2 based, the SBUs
listed may end up being somewhat understated. This is because the SBU is based
on the very first package, compiled with the old GCC, while the rest of the
system is compiled with the newer GCC-&gcc-version; which is known to be
approximately 30% slower.
Also note that SBUs don't work well for SMP-based machines. But if you're
so lucky as to have multiple processors, chances are that your system is so fast
that you won't mind.
If you wish to see actual timings for specific machines, have a look at
.
About the test suites
Most packages provide a test suite. Running the test suite for a newly
built package is generally a good idea, as it can provide a nice sanity check
that everything compiled correctly. A test suite that passes its set of checks
usually proves that the package is functioning as the developer intended. It
does not, however, guarantee that the package is totally bug free.
Some test suites are more important than others. For example, the test
suites for the core toolchain packages -- GCC, Binutils, and Glibc -- are of
the utmost importance due to their central role in a properly functioning
system. But be warned, the test suites for GCC and Glibc can take a very long
time to complete, especially on slower hardware.
Experience has shown us that there is little to be gained from running
the test suites in . There can be no
escaping the fact that the host system always exerts some influence on the
tests in that chapter, often causing weird and inexplicable failures. Not only
that, the tools built in are
temporary and eventually discarded. For the average reader of this book, we
recommend not running the test suites in . The instructions for running those test
suites are still provided for the benefit of testers and developers, but they
are strictly optional for everyone else.
A common problem when running the test suites for Binutils and GCC is
running out of pseudo terminals (PTYs for short). The symptom is a very high
number of failing tests. This can happen for several reasons, but the most
likely cause is that the host system doesn't have the
devpts file system set up correctly. We'll discuss this in
more detail later on in .
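A quick way to check whether your host has a devpts file system mounted at all is, for example:
mount | grep devpts
If this prints nothing, devpts is probably not mounted on /dev/pts, which makes the PTY-related failures described above much more likely.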
Sometimes package test suites will give false failures. You can
consult the LFS Wiki at to verify that these
failures are normal. This applies to all tests throughout the book.