Libstream - C++ network I/O library.
Libstream/nanostream is a small (about 15000 lines of code) C++ networking library, written in the same style as the STL. Once a stream is opened, standard algorithms (or the library's specializations of them) can be used to perform input/output to and from network streams. TCP and UNIX-domain streams are currently supported; other kinds of transport should be easy to add (SSL transport is in the works). While beyond the scope of the library, file I/O is also supported for completeness. At present only synchronous I/O is supported; asynchronous I/O is in the works.

UPDATE: Boost finally has a networking library. Libstream is very likely to be reimplemented on top of Boost.Asio. Work is currently being done in the coroutine module.

Here is an example echo server to get a taste of the library:
#include <utility/test_helper.hpp>
#include <boost/function_output_iterator.hpp>
#include <stream/algorithm.hpp>
#include <stream/inet.hpp>
#include <stream/buffer_type.hpp>
#include <stream/buffered_stream_adaptor.hpp>
#include <stream/input_stream_iterator.hpp>
#include <stream/output_stream_iterator.hpp>

namespace stream {

  /* Build a buffered stream type on top of the TCP stream type using a libstream adaptor */
  typedef buffered_stream_adaptor<domain_inet::stream_type> stream_type;

  /*
   * This functor reads one byte at a time and performs
   * a shutdown of the read direction of the stream
   * when it detects two consecutive carriage-return/newline
   * pairs (i.e. the sequence "\r\n\r\n").
   * This functor is for exposition only; libstream actually
   * provides a better, more generic version of it.
   */
  class check_termination {
    enum state {none, nl_received};
    state m_state;
    int m_count;
    buffer_type& m_buffer;
    stream_type& m_stream;

  public:
    /* the stream must be taken by reference: binding the reference
       member to a by-value parameter would leave it dangling */
    check_termination (buffer_type& buf, stream_type& stream) :
      m_state(none), m_count(0), m_buffer(buf), m_stream(stream) {}

  public:
    void operator() (char c) {
      switch(m_state) {
      case nl_received:
        m_state = none;
        if(c == '\n') {
          m_count++;
          if(m_count == 2)
            m_stream.shutdown(stream_type::read);
        } else if(c == '\r') {
          m_state = nl_received;
          m_count = 0;
        } else
          m_count = 0;
        break;
      case none:
        if(c == '\r')
          m_state = nl_received;
        else
          m_count = 0;
        break;
      }
      m_buffer.push_back(c);
    }
  };

  /* The server itself */
  void echo_service() {

    /* Create an acceptor that listens on local port 2007 */
    domain_inet::acceptor_type acceptor("127.0.0.1", "2007");

    /* keep running */
    while(true) {
      try{

        /* an unconnected buffered stream */
        stream_type stream;

        /* connect the stream to a client */
        acceptor.accept(stream);

        /* a temporary buffer */
        buffer_type my_buffer;

        /* copy data to the temporary buffer until the read direction of the stream is shut down */
        stream::copy(input_stream_iterator<stream_type>(stream),
                     input_stream_iterator<stream_type>(),
                     boost::make_function_output_iterator
                     (check_termination(my_buffer, stream)));

        /* output the buffer to the stream */
        stream::copy(my_buffer.begin(),
                     my_buffer.end(),
                     output_stream_iterator<stream_type>(stream));

      /* catch (and ignore) errors */
      } catch (stream_exception&) {}
    }
  }
}


int main() {
  stream::echo_service();
}

Easy? Well, it could actually be shorter: the newline-matching functor is not really needed, since libstream has a better one ready for use, and the temporary buffer is unnecessary because you can copy the input stream directly to the output stream.
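The direct-copy simplification would look roughly like the fragment below, reusing only the iterator types already shown above (a sketch against a connected `stream`, not a complete program; the library's ready-made termination functor is omitted since its name is not shown here):

    /* echo bytes straight back to the client, with no intermediate buffer */
    stream::copy(input_stream_iterator<stream_type>(stream),
                 input_stream_iterator<stream_type>(),
                 output_stream_iterator<stream_type>(stream));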
On the other hand, a real server might want to handle more than one connection at a time. For this and more, read the documentation and try the code.
For updates, remember to check this page periodically.

An initial (0.1) release is currently available from SourceForge.

The only documentation currently available is an introductory PDF; some early, extremely incomplete source documentation can be built with Doxygen. Bug the author to update this page.

Giovanni P. Deretta
gpderetta at gmail dot com