[geeks] UNIX development and makefile discussion

Jonathan Patschke jp at celestrion.net
Thu Aug 15 15:42:24 CDT 2013


On Wed, 14 Aug 2013, microcode at zoho.com wrote:

> I have heard of cmake and of course autotools but I don't know how big a
> project has to be before that stuff starts making sense.

cmake can make sense, but then people consuming your code need cmake to
build and install it, and cmake is pretty heavy.  If that's okay, cmake is
a good tool to learn; I don't honestly find it much more useful than GNU
make for what I do, though.

My biased[1] opinion is that autotools never makes sense.

Let's consider the value proposition of autotools: That easy-to-resolve
portability issues are addressed without your intervention.  That's good,
maybe.

Now let's consider how most FOSS projects actually seem to _use_
autotools: "I'll use autotools so that I only ever have to test the build
on my Linux Mint box, and all those DG/UX users can run my code
unmodified.  Whee!"  This is insanity.

Most of what you get with autotools you can get with GNU make (or BSD
make, but GNU make seems to be everywhere).  Want to install somewhere
else?  Use a PREFIX macro in your makefile.
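For example, a minimal install target with an overridable prefix might look like this (a sketch; the "prog" binary name and the DESTDIR staging convention are my assumptions, not from the post):

```make
# Sketch: overridable install location.  PREFIX can be set on the
# command line ("make install PREFIX=/opt/foo"); DESTDIR supports
# staged installs for packaging.
PREFIX?=        /usr/local
BINDIR?=        $(PREFIX)/bin

install:        prog
	install -d $(DESTDIR)$(BINDIR)
	install -m 755 prog $(DESTDIR)$(BINDIR)/prog
```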

Say you have some code that only makes sense on a given platform.  Put
this in your makefile:

         OS=             $(shell uname -s | tr a-z A-Z)
         CPPFLAGS+=      -DOS_$(OS)

And this in your C code:

         #if defined(OS_DARWIN)
           ...
         #elif defined(OS_IRIX)
           ...
         #endif

And there you go.

Yes, it's a manual process.  Correctly porting code is almost _always_ a
manual process.  Autotools is half lie and half happy accident.

> What really strikes me about UNIX is how much the tools get in my way
> and take up much of my time.

This is true of learning any new development platform.  They only get in
your way and take up your time because you're thinking in a different
paradigm.  You can either learn the tools the platform gives you or get
tools that work more like you expect.  The latter is almost always an
uphill battle.

Consider this block of Makefile:

         TESTS_SRC=    < list of .cpp files in tests/ >
         TESTS=        $(TESTS_SRC:%.cpp=%)
         OBJS=         < all of my library object files >
         TESTCOMMOBJS= < all of my test framework object files >

         %.o:    %.cpp
                 $(CXX) $(CPPFLAGS) $(CXXFLAGS) -c -o $@ $<

         tests/%:    tests/%.o $(TESTCOMMOBJS)
                 $(CXX) -o $@ $(TESTCOMMOBJS) $@.o $(LDFLAGS)

There I have special build rules just for my unit tests, by virtue of the
directory they live in.  They all need a tiny chunk of code to talk to the
test framework, they all consist of a single C++ source file, and this one
rule makes that all very concise.  When I add a new test, I only need to
update the list in TESTS_SRC to see that it gets built.
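If even maintaining TESTS_SRC by hand gets old, GNU make can glob the directory for you (a sketch using GNU make's $(wildcard) function):

```make
# Pick up every .cpp under tests/ automatically; a new test file
# gets built without touching the makefile at all.
TESTS_SRC=      $(wildcard tests/*.cpp)
TESTS=          $(TESTS_SRC:%.cpp=%)
```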

> At this point writing makefiles isn't a major issue for me. It happened
> in stages and I have good skeletons for the kind of work I'm doing. My
> problem is more about how to organize the code so I can manage pieces
> the way I like to work. I probably should have tried to make a picture
> to explain what I'm asking about but the ones I draw usually don't help
> that much.

I like using separate directories for different concerns.  Unit-tests,
documentation, and test/default data are all separate concerns.

Structurally, I like to think of all my projects as libraries with
front-ends.  Maybe there's only one front-end and it's never replaced;
that's okay.  I usually lay out my projects like this:

   /         - Makefile, README, TO-DO list, and CHANGELOG
   /doc      - Contains design notes and long-form documentation
   /include  - Contains "library" headers
   /legal    - Contains licenses for any 3rd-party code I've included
   /man      - Contains manual page sources
   /mk       - Included makefile bits (portability, per-library rules)
   /src      - Contains "library" code
   /tests    - Contains unit and integration tests
   /tests/exp - Contains expected output for each test
   /tests/in  - Contains input data for tests
   /tests/out - Receives actual output for each test
   /tools    - Contains programs that get made to facilitate the build
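With that in/exp/out split, the test harness can be little more than a diff loop (a sketch; the "check" target name and the one-input-file-per-test assumption are mine):

```make
# Run each test binary against its input and compare actual output
# to the expected output recorded under tests/exp/.
check:          $(TESTS)
	@for t in $(notdir $(TESTS)); do \
	    ./tests/$$t < tests/in/$$t > tests/out/$$t; \
	    diff -u tests/exp/$$t tests/out/$$t || exit 1; \
	done
```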

I then have a separate directory for each executable program that gets
built.  Maybe there's a server portion and a client portion.  Each of
those directories has "header files" that are used only within that
front-end versus the <header files> that are part of the library code.

If there's a front-end that doesn't get built (maybe it's mostly HTML and
Javascript, or a Perl thing), it still gets a separate directory.

If there are multiple "library"-type things, I tend to give them each a
directory, rather than a common "src" directory, and I mirror that
structure under "include".  So, libfoo gets /libfoo and /include/libfoo.

I'm also a fan of the One True Makefile design.  Yes, this occasionally
means you get to include per-directory rules.  It also means that
splitting out the library code isn't as easy as running tar.  However, it
means that when server/foo/bar/baz.cpp won't build, you can run 'make
server/foo/bar/baz.o' from the root of the project as you iteratively fix
it.
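Concretely, the One True Makefile usually ends up as a short root file that pulls in those per-directory fragments (the file names here are my convention, not a standard):

```make
# Root Makefile: every path is written relative to the project root,
# so "make server/foo/bar/baz.o" works from here.
include mk/portability.mk
include src/rules.mk
include server/rules.mk
include tests/rules.mk

# where server/rules.mk might contain lines like:
#   SERVER_OBJS+=   server/foo/bar/baz.o
#   server/srv:     $(SERVER_OBJS)
```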

I usually make an exception for when I include[2] other projects.  I just
include them and run their Makefiles as part of mine.  Sometimes, though,
I've found it useful to "unroll" the included Makefile (especially when
autoconf is involved and it hard-codes the prefix path into somewhere
uncomfortable) into one of the files in /mk.


However, to more directly address your concern about functionality that
gets used in multiple places, I'd always opt to separate it out into a
library (specifically, an archive library).  Don't feel silly if you end
up with three or four library directories and only a tiny driver program
for each of the things you actually intended to ship.  This is, IMO, a
sign of well-designed code.
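Building an archive library costs exactly one extra rule in the makefile (a sketch; "libfoo", "mytool", and LIBFOO_OBJS are stand-in names):

```make
# Bundle the library objects into an archive, then link the thin
# driver program against it.
libfoo/libfoo.a:        $(LIBFOO_OBJS)
	ar rcs $@ $^

mytool: mytool/main.o libfoo/libfoo.a
	$(CXX) -o $@ mytool/main.o libfoo/libfoo.a $(LDFLAGS)
```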


[1] Credentials: I develop for FreeBSD, HP-UX, Windows, Solaris, Linux,
     and OpenBSD, in roughly descending order.  My largest current project
     has 140-ish source files including several that are auto-generated by
     flex, bison, and a few Perl scripts.
[2] Most of my development is on programs that are only used within an
     organization.  If you want to assume a particular version of a
     library, the easiest thing is to just include it and trigger it to
     compile to where you expect it with the options you want.
-- 
Jonathan Patschke  |  "For a successful technology, reality must take
Elgin, TX          %   precedence over public relations, for nature
USA                |   cannot be fooled."           --Richard Feynman

