[geeks] UNIX development and makefile discussion
microcode at zoho.com
Fri Aug 16 02:12:36 CDT 2013
On Thu, Aug 15, 2013 at 03:42:24PM -0500, Jonathan Patschke wrote:
> On Wed, 14 Aug 2013, microcode at zoho.com wrote:
>
> >I have heard of cmake and of course autotools, but I don't know how big a
> >project has to be before that stuff starts making sense.
>
> cmake can make sense, but then people consuming your code need cmake to
> build and install it, and cmake is pretty heavy.
Thanks, this is a concern of mine in the abstract, and I'm glad to hear
someone address it, or at least mention that it is a possible issue. I'm used
to writing and shipping code that has no dependencies except the OS it runs
on, so I find the UNIX model troubling generally. But we also only ship
executables and never source, so it's a very different environment in many ways.
I'm annoyed when I have to run after libraries and tools to build anything
other people have written for UNIX, so I would be inclined to burden myself
rather than the user of whatever I write. Again, I'm not sure if this is a real
issue, since I'm not planning to distribute anything I write.
> My biased[1] opinion is that autotools never makes sense.
It would seem all intelligent people agree with this.
> >What really strikes me about UNIX is how much the tools get in my way
> >and take up much of my time.
>
> This is true of learning any new development platform.
That's true but it's not *only* because it's a new platform.
> They only get in your way and take up your time because you're thinking in
> a different paradigm.
True, paradigm shifting can be very hard. But there are qualitative
differences, and some platforms are better than others for developers,
users, etc.
> The first part you can either learn the tools the platform gives you or get
> tools that work more like you expect. The latter is almost always an
> uphill battle.
Very true. In fact what I'm writing is a set of tools because I don't like
the ones that exist now. But writing tools is a big part of what I've always
done. I don't find doing that a problem in itself. I don't like that I feel
I *have* to write *these* particular tools, but if I can tolerate UNIX better
after I have stuff that works the way I want, it will be less annoying to do
development on some of the machines I have. I'm trying to be practical but
I'm not very good at it.
> Consider this block of Makefile:
>
> TESTS_SRC= < list of .cpp files in tests/ >
> TESTS= $(TESTS_SRC:%.cpp=%)
> OBJS= < all of my library object files >
> TESTCOMMOBJS= < all of my test framework object files >
>
> %.o: %.cpp
> $(CXX) $(CPPFLAGS) $(CXXFLAGS) -c -o $@ $<
>
> tests/%: tests/%.o $(TESTCOMMOBJS)
> $(CXX) -o $@ $(TESTCOMMOBJS) $@.o $(LDFLAGS)
>
> There I have special build rules just for my unit tests, by virtue of the
> directory they live in. They all need a tiny chunk of code to talk to the
> test framework, they all consist of a single C++ source file, and this one
> rule makes that all very concise. When I add a new test, I only need to
> update the list in TESTS_SRC to see that it gets built.
I can follow makefiles now to some extent but writing fancy ones from
scratch is beyond me. I'm sure this is a matter of time and interest. I like
your example.
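To check my own understanding, here is the same block with the placeholders
filled in with made-up file names (this is my guess at a complete, minimal
Makefile, not your actual file list):

    # hypothetical names for illustration only
    TESTS_SRC=    tests/parse.cpp tests/eval.cpp
    TESTS=        $(TESTS_SRC:%.cpp=%)
    OBJS=         parse.o eval.o util.o
    TESTCOMMOBJS= tests/harness.o

    all: $(TESTS)

    %.o: %.cpp
    	$(CXX) $(CPPFLAGS) $(CXXFLAGS) -c -o $@ $<

    tests/%: tests/%.o $(TESTCOMMOBJS)
    	$(CXX) -o $@ $(TESTCOMMOBJS) $@.o $(LDFLAGS)

If I read it right, "make tests/parse" builds tests/parse.o via the first
pattern rule, then links it against the harness object via the second.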
> I like using separate directories for different concerns. Unit-tests,
> documentation, and test/default data are all separate concerns.
Yes, I agree with this. I'm used to having one kind of file in each
directory (not the terms I would use, but good enough for general
discussion), and I would like to manage things that way if I could, but right
now it's more than I can deal with. I don't want to spend what little time I
have managing things and solving tool puzzles instead of writing code, so for
this project, for now, I am going with the "production" code in one directory
and the tests in another.
>
> Structurally, I like to think of all my projects as libraries with
> front-ends. Maybe there's only one front-end and it's never replaced;
> that's okay. I usually lay out my projects like this:
>
> /          - Makefile, README, TO-DO list, and CHANGELOG
> /doc       - Contains design notes and long-form documentation
> /include   - Contains "library" headers
> /legal     - Contains licenses for any 3rd-party code I've included
> /man       - Contains manual page sources
> /mk        - Included makefile bits (portability, per-library rules)
> /src       - Contains "library" code
> /tests     - Contains unit and integration tests
> /tests/exp - Contains expected output for each test
> /tests/in  - Contains input data for tests
> /tests/out - Receives actual output for each test
> /tools     - Contains programs that get made to facilitate the build
That's nice!
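If I ever try that layout, I'd guess the top-level Makefile pulls in the mk/
fragments with include lines, something like this (file names invented on my
end):

    # hypothetical: how I imagine the mk/ bits get used
    include mk/platform.mk    # portability settings (compiler, flags)
    include mk/libproj.mk     # per-library build rules

    all: libproj.a

Is that roughly how the mk/ directory works, or do those files get used some
other way?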
> I then have a separate directory for each executable program that gets
> built. Maybe there's a server portion and a client portion. Each of
> those directories has "header files" that are used only within that
> front-end versus the <header files> that are part of the library code.
I understand the part about grouping related executables, but did you mean
you have a separate directory for each executable? I mean, that's what you
seem to have written, but I'm not sure I understand what you meant.
> However, to more directly address your concern about functionality that
> gets used in multiple places, I'd always opt to separate it out into a
> library (specifically, an archive library). Don't feel silly if you end
> up with three or four library directories and only a tiny driver program
> for each of the things you actually intended to ship. This is, IMO, a
> sign of well-designed code.
I have to look into this more. From a distribution standpoint (not that it's
relevant to what I'm working on now), libraries could be a real problem. For
one guy developing stuff for his own use, libraries are annoying because the
interface is compiler-specific. (This is not C/C++.)
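Still, for the C/C++ case at least, the archive-library setup you describe
looks manageable even to me; as far as I can tell it comes down to an ar
invocation plus a link line, something like this (names made up):

    # hypothetical: build a static archive, then link a small driver against it
    LIBOBJS= src/parse.o src/eval.o src/util.o

    libproj.a: $(LIBOBJS)
    	ar rcs $@ $(LIBOBJS)    # r: replace members, c: create, s: write index

    myprog: myprog.o libproj.a
    	$(CC) -o $@ myprog.o libproj.a $(LDFLAGS)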
Thanks a lot for all the info and detail. I'm saving your post!