Frequently Asked Questions

D. E. Evans' Core Linux Distribution is the outgrowth of my persistent, continuing use of Josh Devin's Core Linux Distribution. Whether building Linux From Scratch (LFS) or a GNU/Linux installation for a specific purpose, Core (not to be confused with CoreOS or Tiny Core) provides a small footprint for adding on what is wanted. It is a modern GNU/Linux system. It isn't retro-computing, but rather a classic approach to distributing and building Linux. It started from a minimal base: tuned for the i386 CPU, with a tiny, non-modular kernel configuration. Building up the system could therefore also mean rebuilding parts of it when optimization was required. This is exactly what I did with Core.

I consider this project to be the successor to Core, Core 2, and Prime. It can be considered the Core 3 project for AMD64 systems, and an ongoing outgrowth of the original Core: early versions for i386, then retuned to i486, along with fixes for Prime (which targeted i686) as I used and explored it. However, the oldest systems I had available for testing were my NEC Pentium, which has since been sold, and a friend's Pentium II and Pentium III. So I optimized against the gamut of the main Intel 32-bit architectures.


Josh Devin's Core Linux Distribution started in 2002, presumably as a project building LFS (3.3?). It had its formal release on 4 April 2003, with packages dated 3 February 2003, and symbolic links showing an install of the IUR on 14 February 2003, built against the Linux 2.4.18 kernel (already a stability kernel). A small community formed around its use. I installed Core and began using it around this time, fashioning build scripts to preserve my work, perhaps influenced by Slackware. I began slowly by rebuilding Core; then, once confident, I began upgrading the existing packages and adding new ones. It became apparent that CorePKG needed an in-place upgrade feature, so I modified it accordingly. I had no reason to upgrade to the 2.6 kernel, as my hardware worked fine with the existing 2.4 kernel. From time to time, I had to rebuild the toolchain, following the mainline 2.4 kernel through the end of Willy Tarreau's stable Linux patches.

Will the original Core work on an actual i386 SX or DX Intel computer? I believe so, if the hardware can meet the requirements, which may include after-market upgrades. For instance, a 500 MB hard drive and 32 MB of RAM should be enough, though it might be slow. The last mainline kernel that included the i386 architecture code was 3.7, patched to 3.7.10 (released 27 February 2013). The last patches to a kernel that supported i386 were for 3.2, so 3.2.102 (released 1 June 2018) is perhaps the last kernel from a security standpoint. My guess is that neither kernel received much love in terms of i386 patching and testing. Since both have been end-of-life for years, it may be better to run the 3.7 mainline kernel, as released by Torvalds, for i386 retro-computing. The i486 will require care with kernel configuration, especially with older models. The Pentium and Pentium Pro should work fine with the original Core. I extensively tested it on a Pentium II and III, which worked wonderfully once the kernel configuration was right.

Around Christmas 2010, Willy announced that Linux 2.4 would go end-of-life in a year. I had assumed a final package would be released, but that never occurred. On 9 April 2012, Tarreau indicated that he had pushed a set of patches for the kernel, but that a final tarball would not be released. One patch was added to this on 6 October 2012, and the 2.4 kernel series was done.

It was time to put together a new installation CD, to preserve my work before moving on to Linux 2.6, and to support the small user base I had at the time. Unfortunately, because of that user base, I had a lot of experimental builds, so what I released was not the stable, streamlined build that was Core. I wrote Josh, and he licensed CorePKG to me under the GPL. The first CD came out in 2010, and I included all sorts of things from my experimental work with that release. I called it sinuhe's GNU/Linux Operating System (sGOS) at first, and posted the experimental version online. However, Devin's original design made more sense, and was far more meticulous, so I returned to it. I continue to use that system at home, off and on, accumulating package updates with the newest sources I can still build with it.

In 2008, the Core 2 project (released 1 May 2007) had morphed into the Prime distribution, which released a year later on 28 May 2008. However, though I followed the community, I was updating the 2.4 kernel and didn't use Core 2. Prime GNU/Linux used a newly designed build kit, without package management, perhaps another artifact of the LFS approach, but it lacked Devin's meticulous, trim approach to building a minimal system for installation. It seems to be a basic upgrade of Core 2. I explored the system carefully, fixed some of its bugs, created a kit for converting it back to CorePKG, and then continued on. At this point the Linux kernel that came with Prime was still receiving stable patches. Greg K. H. recommended moving to 2.6.27, which Willy Tarreau picked up at about the same time he called 2.4 EOL, later moving on to 2.6.32. I continued following Willy's kernels until 3.10 went end-of-life.

It had become apparent that my 32-bit hardware was aging, and, not surprisingly, my only Pentium 4 ultimately overheated. The Pentium M was one way forward, except AMD64 was where the industry was going. I bootstrapped a 64-bit kernel on a Dell workstation that my neighbor gave me, continually rebuilding from a chroot until I had a small base for AMD64. (I stopped buying systems years ago, and only use hand-me-down computers.) When the last 4.14 kernel came out, I decided to move on. The 6.6 mainline kernel will be the final resting point for the next primary release, which I'm currently building and stabilizing. From there, we'll see what the future holds, based on what hardware I have at home and what requests I get from others.


Why this distribution of GNU/Linux?
I caught the bug from early (2.3) Slackware in 1996, and had an aspiration to explore the Linux From Scratch HOWTO (now the LFS book), then later Josh Devin's Core Linux Distribution (or Coredistro, its SourceForge project name).
Why not Slackware?
From Coredistro, I can build up the software with my own configuration and preferences. It's easier to maintain and learn a small system, and to keep add-on packages separate, to be added when and if I need them. This also allows others to do the same. I still keep a copy of Slackware around for my home laptop, but work has moved me over to Apple, and I can no longer run a Slackware laptop on the work VPN (which I did for as long as they let me!).
Why not Prime?
The Core 2 project was interesting, especially as it used the 2.6 kernel, but Josh's system was much smaller and meticulously constructed, and the 2.4 kernel was working just fine with my aging hardware. I did ultimately switch to Prime, but in the end converted its build system over to CorePKG, then continued upgrading as I always had. Prime's main influences were keeping a modular kernel package for transfer to different systems with GRUB, and its /boot/ directory layout.
What hardware does this distribution run on at home?
Currently, my main system is a Dell Precision T5400 (Intel Xeon) that my neighbor donated. I also use other systems that people donate.
Why is there no multilib, e.g. a lib64/ directory?
To keep it small. The only reason to keep 32-bit binaries around is to run 32-bit software that was built for, or only builds on, 32-bit Intel. This is the practice followed by LFS and Gentoo as well. Slackware provides lib64/ directories in case 32-bit multilib is desired.
Does the ISO support USB?
AMD64 uses USB instead of floppy as its base boot configuration, though firmware can be programmed for whatever is needed. The ISO is also crafted to remain suitable for optical media (e.g. DVD).
What is the patching cycle?
For a time, I provided security patches for the currently available ISO. However, I stopped doing that. The ISO is intended for installing, recovering, and building up your own system. It is up to the user to manage installed and additional packages. Since I work for a living, this is done for my own systems on the weekends or free evenings, as I use them. This is not like Slackware where there is a stable tree maintained with security patches.
What is the release cycle?
Here's where things sit:
  • (Pentium III)
  • (Pentium M, x86-64)
  • (x86-64)
  • 3.10.108 (x86-64)
  • 4.14.336 (x86-64)
  • 6.6 (x86-64)
6.1 is a beta and snapshot, and 6.6 will become the final and only publicly available release. If at some point a chroot from the IUR fails for what I build at home, I'll post a stable snapshot of the IUR, and ultimately a final release with the last stable kernel patch. I upgrade packages at home based on need and security, so a snapshot will have whatever is secure at the time, with a stable toolchain and packages built against it.
Where are the sources?
Source code is provided for the Linux kernel on the ISO, but not for the other packages. Each package contains information on where to obtain the source code (including source patches, though these tend to be rare): corepkg -q corepkg-7+1.cpk. Sources for ISOs no longer available are not supported, but feel free to ask me: I may still have a backup somewhere.
Why isn't the directory structure complete?
I don't necessarily follow the FHS or the LSB, and POSIX only requires /dev/, and then only for shm and null. Instead, I've preserved the directory structure of Devin's Coredistro, and removed any empty directories. The base-*.cpk package has symlinks related to the /usr/ merge change; see the article The Case for the /usr Merge. I certainly have made some changes to Core's original design, and Linux has evolved. Those changes were once documented, and some are to follow, but it's been over 20 years.
Why is the /usr/sbin/ directory missing?
This is an early change from the original Coredistro. /sbin/ was introduced by Sun for duplicate, statically compiled binaries used during the boot process. Most Linux distributions' sbin/ directories, and the FHS descriptions, contain dynamically linked binaries, with an inconsistent sense that they are for administrative use only. Repurposing sbin/ to isolate administrative binaries would make sense if those binaries were restricted to the root user (as they are with Devin's Core Linux Distribution), or perhaps to an administrative group, but that is not the case either. Even in 2003 this was not interpreted consistently, and the FHS reflects the confusion well. I've eliminated the /usr/sbin/ directory as superfluous. Instinctively, I was starting to think everything should be in /bin/ and /lib/ with an /sbin/ symlink, but after encountering the systemd position that this approach was broken, I did an about-face and moved everything entirely to /usr/ (the /sbin/ symlink to /usr/bin/ is necessary, as some applications still hard-code paths to /sbin/ programs). Sadly, the /usr/ directory, the original research Unix directory that /home/ replaced, must be repurposed: a small price to pay for compatibility and consistency. If a tool needs to be restricted, it still can be; a separate directory is unnecessary.
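The layout described above can be sketched as follows. This builds the links in a scratch directory purely for illustration; it is not the distribution's actual install script, and the paths are my own rendering of the description:

```shell
# Sketch of the merged layout described above, built in a scratch
# directory rather than on a live system.
root=$(mktemp -d)
mkdir -p "$root/usr/bin" "$root/usr/lib"
# Everything lives under /usr/; top-level compatibility symlinks keep
# hard-coded paths like /sbin/init working. Note there is no usr/sbin/.
ln -s usr/bin "$root/bin"
ln -s usr/bin "$root/sbin"
ln -s usr/lib "$root/lib"
ls -l "$root"
```

Both /bin/ and /sbin/ resolve to the same /usr/bin/, so an administrative tool is found wherever a program expects it, without maintaining a separate directory.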
Why is systemd not used?
systemd is a large collection of services and utilities, which is not to say the individual pieces are big. My goal is to keep the system small and simple, and sysvinit as implemented here is very small. systemd has its benefits, but they don't match how I use the system. Perhaps in the future they may. Some of the goals met by systemd address cruft distributions have accumulated over the years. There's not much cruft in Core.
Why corepkg and not a more traditional package manager?
CorePKG is really simple. I've added in-place upgrading as a feature. Until there's a pressing need, I don't want to add more. Besides, it's what Core provided, and I've never needed something bigger, nor does the purpose of Coredistro call for it. Volkerding has made arguments for why he stays with tarballs for Slackware, which are similar to my reasoning.
Why does CorePKG not have post-install scripts?
I've considered adding this feature, and I don't think it would be a difficult addition, but so far keeping things simple seems beneficial. Installing a configuration file in a place where it will be modified introduces unnecessary home-grown complexity. Certainly, pre- and post-installation functionality can be used for other things (e.g. info manuals), but it seems to be a feature primarily used for configuration, and I find configuration management tooling a better approach. I'm open to being convinced otherwise. For now, configuration examples, when undocumented, can be provided in /usr/share/ with simple defaults. This approach gives the user and administrator control, instead of the packager, and avoids pesky .save- or .new-style files. I don't like services started by default, or preconfigured, except perhaps syslogd.
Why are static library archives provided? Doesn't LFS remove them?
I believe it was Sun that first popularized shared object files with C programs. For an OS distribution this probably makes sense, and the reasoning given by LFS fits. However, there are some great arguments on the Plan 9 mailing list for why static compilation has merit. When writing my own programs, I sometimes like to compile statically for longevity and distribution portability. Plus, Coredistro provided them too.

CorePKG is ©2001-2003 Josh Devin, and ©2010-2017, 2020, 2023 by David Egan Evans.

©2023-2024 David Egan Evans.