Sunday, July 27, 2008

Erlang BGP daemon

I'm writing a BGP daemon in Erlang. It can connect, parse update packets and announce routes.
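For reference, every BGP message starts with the same fixed 19-byte header (RFC 4271): a 16-byte marker of all ones, a two-byte total length, and a one-byte type code. The daemon itself does this in Erlang (presumably with binary pattern matching), but a rough C sketch of validating that header looks like this (names made up for illustration):
#include <stdint.h>
#include <string.h>

/* BGP message types, RFC 4271 section 4.1. */
enum { BGP_OPEN = 1, BGP_UPDATE = 2, BGP_NOTIFICATION = 3, BGP_KEEPALIVE = 4 };

/* Check the fixed header; return the total message length,
   or -1 if the header is malformed. */
static int bgp_parse_header(const uint8_t *buf, size_t buflen, uint8_t *type)
{
    static const uint8_t marker[16] = {
        0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
        0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff
    };
    if (buflen < 19 || memcmp(buf, marker, sizeof marker) != 0)
        return -1;
    int len = (buf[16] << 8) | buf[17];  /* length is in network byte order */
    if (len < 19 || len > 4096)          /* limits from RFC 4271 */
        return -1;
    *type = buf[18];
    return len;
}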

I started up an old Cisco 2620 router and sent route announcements to it until it cried:
%SYS-2-MALLOCFAIL: Memory allocation of 65536 bytes failed from 0x8046F3C8,
Pool: Processor Free: 79840 Cause: Memory fragmentation
Alternate Pool: None Free: 0 Cause: No Alternate pool

-Process= "BGP Router", ipl= 0, pid= 84
-Traceback= 804733A4 804754C8 8046F3CC 80851CDC 8081C7D8 80832848 80832F3C 8
%BGP-5-ADJCHANGE: neighbor 172.16.x.x Down No memory
Muhahahaha!

The program:
git clone http://cvs.habets.pp.se/~marvin/git/eggpd.git
Update:
I flooded the 2620 again for a few minutes and then disconnected the peer. It stopped responding. Well, almost. It answers to ping (17% packet loss), and my existing telnet session seems to be working somewhat, although there is a delay of about 10 minutes between a keystroke and something actually happening. The serial console is no better. A new telnet session I set up only has a delay of a couple of seconds, though.

I did get a "show process cpu history":
    3333333333333333333333333333333333333333333333333333333333
    4444443333344444666664444444444555555555544444666665555577
100
 90
 80
 70
 60
 50
 40
 30     *******    ***************      *****     *********
 20 ************************************************************
 10 ************************************************************
   0....5....1....1....2....2....3....3....4....4....5....5....
             0    5    0    5    0    5    0    5    0    5
But the CPU is not in use :-):
CPU utilization for five seconds: 0%/0%; one minute: 0%; five minutes: 2%
 PID  Runtime(ms)  Invoked  uSecs    5Sec   1Min   5Min  TTY  Process
  86         6860      568  12077   0.87%  0.10%  0.06%    0  BGP Scanner
  80          128      105   1219   0.07%  0.09%  0.02%   67  Virtual Exec
   3          780     1716    454   0.00%  0.00%  0.00%    0  OSPF Hello
   4         5336      865   6168   0.00%  0.06%  0.05%    0  Check heaps
   5            0        6      0   0.00%  0.00%  0.00%    0  Pool Manager
   2           12     1705      7   0.00%  0.00%  0.00%    0  Load Meter
Enough for tonight.

Saturday, June 28, 2008

Buffering in pipes

It seems like I'm missing something obvious, but I can't figure out what.

Some background info:

I've written a program (ind) that puts itself between a subprocess that it creates and the terminal. It acts as a filter for the output from the subprocess.

Like this example, where it prepends "stdout: " to all lines that the subprocess prints to standard output:
$ echo hej
hej
$ ./ind -p 'stdout: ' echo hej
stdout: hej
The dataflow is, from left to right:
echo (the subprocess) -> pipe(2) -> ind -> the terminal
(Stderr runs through a separate pipe(2); stdin hasn't been touched yet.)
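
That middle step is the classic pipe-and-fork pattern. Here is a stripped-down sketch of it (not the actual ind source; the real ind also handles stderr, option parsing and more):
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

/* Run argv[2..] with stdout redirected into a pipe, and prepend
   argv[1] to every line that comes out the other end. */
int main(int argc, char **argv)
{
    int fds[2];
    pid_t pid;

    if (argc < 3) {
        fprintf(stderr, "usage: %s <prefix> <cmd> [args...]\n", argv[0]);
        return 1;
    }
    if (pipe(fds) == -1 || (pid = fork()) == -1) {
        perror("pipe/fork");
        return 1;
    }
    if (pid == 0) {
        /* Child: point stdout into the pipe, then become the command. */
        close(fds[0]);
        dup2(fds[1], STDOUT_FILENO);
        close(fds[1]);
        execvp(argv[2], &argv[2]);
        perror("execvp");
        _exit(127);
    }
    /* Parent: read lines from the pipe and tag them. */
    close(fds[1]);
    FILE *in = fdopen(fds[0], "r");
    char line[4096];
    while (fgets(line, sizeof line, in) != NULL)
        printf("%s%s", argv[1], line);
    fclose(in);
    waitpid(pid, NULL, 0);
    return 0;
}
Invoked as "./sketch 'stdout: ' echo hej" it does the same as the ind example above.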

So far so good.

The problem is that the subprocess' libc (not the kernel) buffers all output using stdio (i.e. all FILE* stuff, and that includes implicit FILE* stuff like printf(3)).

Libc checks if the file descriptor backing stdout is a terminal (isatty(STDOUT_FILENO)=1 I assume), in which case it uses line buffering (write completed lines as soon as the program writes "\n").

If it's not a terminal then the output is fully buffered. Libc won't look for end-of-line characters. I'll show below why this is a problem.

In the example above, echo writes to stdout using stdio into a non-terminal (specifically a pipe(2)). This makes it fully buffered, not line-buffered or unbuffered.

For programs other than a simple echo this not only means you don't see the output immediately, it also destroys the ordering of the lines whenever anything is written to standard error! Stderr is never fully buffered in this way, so the output order from this program is 2,1,3 and not the expected 1,2,3:
#include <stdio.h>
#include <unistd.h>
int main(void) {
    fprintf(stdout, "1 stdout\n");
    sleep(1);
    fprintf(stderr, "2 stderr\n");
    sleep(1);
    fprintf(stdout, "3 stdout\n");
}
Try it yourself. If run in the terminal you get the expected order (1,2,3), while if you run it as "./test 2>&1 | cat" you get 2,1,3. The stderr line ("2") gets thrown into the pipe immediately, while the stdout lines ("1" and "3") are buffered until the program ends (or the buffer fills up).

If the program expects to be run in this way the fix is simple. Tell libc to line-buffer stdout:
setvbuf(stdout, (char *)NULL, _IOLBF, 0);
But this needs to be run from the subprocess, and in this case you can't change the code in the subprocess.
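
(One partial trick that doesn't require touching the source: on ELF systems with a dynamic linker you can LD_PRELOAD a shim whose constructor runs that setvbuf() call inside the subprocess before main() starts. A sketch, assuming gcc; it does nothing for statically linked programs, so it's no general fix:)
/* shim.c -- force line buffering in an unsuspecting subprocess.
   Build: gcc -shared -fPIC -o shim.so shim.c
   Use:   LD_PRELOAD=./shim.so ./test 2>&1 | cat */
#include <stdio.h>

__attribute__((constructor))
static void force_line_buffering(void)
{
    setvbuf(stdout, NULL, _IOLBF, 0);
}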

I thought that there must be an ioctl() that can tell libc to just never buffer stdio on this file descriptor ("never buffer", "pretend this is a terminal"), but there doesn't seem to be. According to this the only way is to actually set up a pseudoterminal (pty). Ugh. Suddenly this project got a bit yucky. Different Unixes have different methods for setting up terminals, and so far the program assumed very little and was very portable. It even worked on IRIX.
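
For the record, the pty version of the fork dance looks roughly like this. It's a sketch assuming glibc, where forkpty() lives in <pty.h> and needs -lutil; the BSDs keep it in <util.h> or <libutil.h>, which is exactly the portability annoyance mentioned above:
#include <stdio.h>
#include <unistd.h>
#include <pty.h>
#include <sys/wait.h>

int main(int argc, char **argv)
{
    int master;
    pid_t pid;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <cmd> [args...]\n", argv[0]);
        return 1;
    }
    pid = forkpty(&master, NULL, NULL, NULL);
    if (pid == -1) {
        perror("forkpty");
        return 1;
    }
    if (pid == 0) {
        /* Child: stdin/stdout/stderr now point at the pty slave, so
           isatty(STDOUT_FILENO) is true and libc line-buffers stdout. */
        execvp(argv[1], &argv[1]);
        perror("execvp");
        _exit(127);
    }
    /* Parent: everything the child prints arrives promptly,
       line by line, on the pty master. */
    char buf[4096];
    ssize_t n;
    while ((n = read(master, buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, (size_t)n);
    waitpid(pid, NULL, 0);
    return 0;
}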

Stderr can still be channeled through a normal pipe(). Stderr is never fully buffered according to C99 7.19.3. It can use a pty too, of course, but it can't use the same pty as stdout uses because then I can't differentiate them on the master end of the pty and do my filtering magic.

It seems I have to do this awkward and less portable terminal emulation, since according to that same 7.19.3, stdin and stdout are fully buffered if (and only if) they can be determined not to refer to an interactive device.

Gah.

The program:
git clone http://cvs.habets.pp.se/~marvin/git/ind.git