The Boost C++ Libraries


Chapter 7: Asynchronous Input and Output


This book is licensed under a Creative Commons License.


7.1 General

This chapter introduces the Boost C++ library Asio, which centers on asynchronous input and output. The name says it all: Asio stands for asynchronous input/output. This library allows C++ programs to process data asynchronously and in a platform-independent manner. Asynchronous data processing means that tasks are triggered without waiting for their completion. Instead, Boost.Asio notifies an application once a task has completed. The main advantage of asynchronous tasks is that the application can perform other work instead of blocking while waiting for a task to complete.

Network applications are typical examples of asynchronous tasks. If data is sent, e.g. over the Internet, it is usually important to know whether or not it has been sent successfully. Without a library such as Boost.Asio, the return value of a function would have to be evaluated. This, however, would require waiting until all data has been sent and either an acknowledgment or an error code is available. With Boost.Asio, the process is split into two separate steps: The first step starts the data transmission as an asynchronous task. Once the transmission has finished, either successfully or with an error, the application is notified about the result in a second step. The crucial difference is that the application does not need to block until the transmission has finished but can execute other operations in the meantime.


7.2 I/O Services and I/O Objects

Applications that use Boost.Asio for asynchronous data processing are based on so-called I/O services and I/O objects. I/O services abstract the operating system interfaces that make asynchronous data processing possible in the first place, while I/O objects are used to initiate specific operations. Whereas Boost.Asio provides only one class for the I/O service, boost::asio::io_service, which is implemented as an optimized class for each supported operating system, it contains several classes for individual I/O objects. Among these, the class boost::asio::ip::tcp::socket is used to send and receive data over a network, while the class boost::asio::deadline_timer provides a timer that expires either at a fixed point in time or after a certain period. The timer is used in the first example below because, unlike most of the other I/O objects provided by Boost.Asio, it does not require any knowledge of network programming.

#include <boost/asio.hpp> 
#include <iostream> 

void handler(const boost::system::error_code &ec) 
{ 
  std::cout << "5 s." << std::endl; 
} 

int main() 
{ 
  boost::asio::io_service io_service; 
  boost::asio::deadline_timer timer(io_service, boost::posix_time::seconds(5)); 
  timer.async_wait(handler); 
  io_service.run(); 
} 

The function main() first defines an I/O service, io_service, which is used to initialize the I/O object timer. Like boost::asio::deadline_timer, I/O objects typically expect an I/O service as the first argument of their constructor. Since the timer resembles an alarm clock, a second argument can be passed to the constructor of boost::asio::deadline_timer indicating either a point in time or a period after which the alarm should go off. The above example specifies a period of five seconds, which starts counting as soon as timer has been defined.

While it would be possible to call a function that returns after five seconds, an asynchronous operation is started with Asio by calling the method async_wait() and passing the name of the handler() function as the single argument. Please note that only the name of the handler() function is passed but the function itself is not called.

The advantage of async_wait() is that the function call returns immediately instead of waiting five seconds. Once the alarm goes off, the function provided as the argument is called accordingly. The application thus can execute other operations after calling async_wait() instead of just blocking.

A method such as async_wait() is called non-blocking. I/O objects usually also provide blocking methods for cases in which the execution flow should be blocked until a certain operation has finished. For example, the blocking wait() method could have been called on boost::asio::deadline_timer instead. Since it blocks, it does not take a handler; it simply returns once the point in time has been reached or the period has elapsed.
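For illustration, a minimal sketch of the first example using the blocking wait() method could look like this; it only demonstrates the blocking variant and is not needed for asynchronous processing.

#include <boost/asio.hpp>
#include <iostream>

int main()
{
  boost::asio::io_service io_service;
  boost::asio::deadline_timer timer(io_service, boost::posix_time::seconds(5));
  // wait() blocks the calling thread until the five seconds have elapsed
  timer.wait();
  std::cout << "5 s." << std::endl;
}

Since wait() blocks, neither a handler nor a call to run() is needed; the program simply continues once the period has elapsed.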

While looking at the source code of the above example, it can be noticed that after the call to async_wait(), a method named run() is called on the I/O service. This is mandatory since control needs to be taken over by the operating system in order to call the handler() function after five seconds.

While async_wait() starts an asynchronous operation and returns immediately, run() actually blocks. Execution therefore stops at the call of run(). Ironically, many operating systems support asynchronous operations only via a blocking function. The following example shows why this limitation is typically not an issue.

#include <boost/asio.hpp> 
#include <iostream> 

void handler1(const boost::system::error_code &ec) 
{ 
  std::cout << "5 s." << std::endl; 
} 

void handler2(const boost::system::error_code &ec) 
{ 
  std::cout << "10 s." << std::endl; 
} 

int main() 
{ 
  boost::asio::io_service io_service; 
  boost::asio::deadline_timer timer1(io_service, boost::posix_time::seconds(5)); 
  timer1.async_wait(handler1); 
  boost::asio::deadline_timer timer2(io_service, boost::posix_time::seconds(10)); 
  timer2.async_wait(handler2); 
  io_service.run(); 
} 

The above application now utilizes two I/O objects of type boost::asio::deadline_timer. The first I/O object represents an alarm going off after five seconds while the second one represents an alarm going off after ten seconds. After each period has elapsed, the functions handler1() and handler2() are called accordingly.

The run() method is again called on the sole I/O service at the end of main(). As previously mentioned, this call blocks execution, passing control to the operating system, which takes over the asynchronous processing. With the aid of the operating system, handler1() is called after five seconds and handler2() after ten seconds.

At first sight, it may come as a surprise that asynchronous processing requires calling the blocking run() method. However, since the application needs to be prevented from terminating, this does not actually pose a problem. If run() did not block, main() would return and thus terminate the application. If the execution of the application should not be blocked, run() should be called within a new thread, since it naturally blocks only the thread it is called in.
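The following sketch shows how run() could be called within a separate thread so that main() is not blocked; it is merely an illustration and assumes that the Boost.Thread library is available.

#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <iostream>

boost::asio::io_service io_service;

void handler(const boost::system::error_code &ec)
{
  std::cout << "5 s." << std::endl;
}

void run()
{
  // run() blocks only the thread it is called in
  io_service.run();
}

int main()
{
  boost::asio::deadline_timer timer(io_service, boost::posix_time::seconds(5));
  timer.async_wait(handler);
  boost::thread worker(run);
  // main() could perform other work here while the worker thread blocks in run()
  worker.join();
}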

Once all asynchronous operations of the particular I/O service have completed, control is returned to the run() method, which then simply returns. Both example applications terminate once all the alarms have gone off.


7.3 Scalability and Multithreading

Developing an application with a library such as Boost.Asio differs from the usual C++ style. Functions that may take a long time to return are no longer called sequentially. Instead of calling blocking functions, Boost.Asio starts asynchronous operations. Code that must run once an operation has finished is placed in the corresponding handler. The drawback of this approach is the physical separation of sequentially executed code, which can make the resulting program harder to understand.

A library such as Boost.Asio is typically used to achieve a higher efficiency of the application. Without the need to wait for a particular function to finish, an application can perform other tasks in between, e.g. starting another operation that may take a while to complete.

Scalability describes the property of an application to effectively benefit from additional resources. Using Boost.Asio is already recommended if long-lasting operations should not block other operations. Since today's PCs are typically equipped with multi-core processors, the usage of threads can increase the scalability of an application based on Boost.Asio even further.

If the run() method is called on an object of type boost::asio::io_service, the associated handlers are invoked in the same thread that called run(). By using multiple threads, an application can call run() several times, once per thread. Once an asynchronous operation has finished, the I/O service executes the corresponding handler in one of these threads. If a second operation finishes shortly after the first, the I/O service can execute its handler in a different thread without waiting for the first handler to return.

#include <boost/asio.hpp> 
#include <boost/thread.hpp> 
#include <iostream> 

void handler1(const boost::system::error_code &ec) 
{ 
  std::cout << "5 s." << std::endl; 
} 

void handler2(const boost::system::error_code &ec) 
{ 
  std::cout << "5 s." << std::endl; 
} 

boost::asio::io_service io_service; 

void run() 
{ 
  io_service.run(); 
} 

int main() 
{ 
  boost::asio::deadline_timer timer1(io_service, boost::posix_time::seconds(5)); 
  timer1.async_wait(handler1); 
  boost::asio::deadline_timer timer2(io_service, boost::posix_time::seconds(5)); 
  timer2.async_wait(handler2); 
  boost::thread thread1(run); 
  boost::thread thread2(run); 
  thread1.join(); 
  thread2.join(); 
} 

The example from the previous section is now converted to a multithreaded application. Using the boost::thread class, defined in boost/thread.hpp and part of the Boost C++ Library Thread, two threads are created within main(). Both threads are calling the run() method for the single I/O service. This allows the I/O service to utilize both threads for executing handler functions once individual asynchronous operations have completed.

Both timers in the example application are set to expire after five seconds. Since two threads are available, handler1() and handler2() can be executed simultaneously. If the second timer expires while the handler of the first one is still being executed, the second handler is executed in the second thread. If the handler of the first timer has already returned, the I/O service is free to choose either thread.

Threads can increase the performance of an application. Since threads are executed on processor cores, there is little sense in creating more threads than there are cores. This ensures that each thread can run on its own core without competing with other threads for that core.
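As a sketch, the number of threads could be derived from the number of cores reported by boost::thread::hardware_concurrency(); this snippet only illustrates the idea and assumes that asynchronous operations have already been started on io_service.

#include <boost/asio.hpp>
#include <boost/thread.hpp>

boost::asio::io_service io_service;

void run()
{
  io_service.run();
}

int main()
{
  // ... start asynchronous operations on io_service here ...
  unsigned int cores = boost::thread::hardware_concurrency();
  if (cores == 0)
    cores = 1; // hardware_concurrency() may return 0 if the value cannot be determined
  boost::thread_group threads;
  for (unsigned int i = 0; i < cores; ++i)
    threads.create_thread(run);
  threads.join_all();
}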

Please note that using threads does not always make sense. Running the above example can result in the individual messages being mixed on the standard output stream because the two handlers, which may run in parallel, access a single shared resource: the standard output stream std::cout. Access needs to be synchronized to guarantee that each message is written completely before another thread can write to the standard output stream. Using threads in such a scenario provides little benefit if the individual handlers cannot be executed independently in parallel.
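One possible way to synchronize access to std::cout is to guard the output statements with a mutex from Boost.Thread. The following sketch adapts the previous program accordingly; it is only one option and not required by Boost.Asio itself.

#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <iostream>

boost::asio::io_service io_service;
boost::mutex mutex;

void handler1(const boost::system::error_code &ec)
{
  // the lock guarantees that a message is written completely before another thread writes
  boost::lock_guard<boost::mutex> lock(mutex);
  std::cout << "5 s." << std::endl;
}

void handler2(const boost::system::error_code &ec)
{
  boost::lock_guard<boost::mutex> lock(mutex);
  std::cout << "5 s." << std::endl;
}

void run()
{
  io_service.run();
}

int main()
{
  boost::asio::deadline_timer timer1(io_service, boost::posix_time::seconds(5));
  timer1.async_wait(handler1);
  boost::asio::deadline_timer timer2(io_service, boost::posix_time::seconds(5));
  timer2.async_wait(handler2);
  boost::thread thread1(run);
  boost::thread thread2(run);
  thread1.join();
  thread2.join();
}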

Calling the run() method of a single I/O service multiple times is the recommended way of adding scalability to an application based on Boost.Asio. There is an alternative, though: Instead of binding multiple threads to a single I/O service, multiple I/O services can be created. Each I/O service then uses one thread. If the number of I/O services matches the number of processor cores on the system, asynchronous operations can each be executed on their own core.

#include <boost/asio.hpp> 
#include <boost/thread.hpp> 
#include <iostream> 

void handler1(const boost::system::error_code &ec) 
{ 
  std::cout << "5 s." << std::endl; 
} 

void handler2(const boost::system::error_code &ec) 
{ 
  std::cout << "5 s." << std::endl; 
} 

boost::asio::io_service io_service1; 
boost::asio::io_service io_service2; 

void run1() 
{ 
  io_service1.run(); 
} 

void run2() 
{ 
  io_service2.run(); 
} 

int main() 
{ 
  boost::asio::deadline_timer timer1(io_service1, boost::posix_time::seconds(5)); 
  timer1.async_wait(handler1); 
  boost::asio::deadline_timer timer2(io_service2, boost::posix_time::seconds(5)); 
  timer2.async_wait(handler2); 
  boost::thread thread1(run1); 
  boost::thread thread2(run2); 
  thread1.join(); 
  thread2.join(); 
} 

The already known example application using two timers has now been rewritten to utilize two I/O services. The application is still based on two threads; however, each thread is now bound to an individual I/O service. Additionally, the two I/O objects timer1 and timer2 are now also bound to the different I/O services.

The functionality of the application is the same as before. Under certain conditions it can be beneficial to have multiple I/O services, each with its own thread and ideally running on its own processor core, since asynchronous operations, including their handlers, can then execute locally. Local and distant refer here to resources such as caches and memory pages. Since specific knowledge about the underlying hardware, the operating system, the compiler, and potential bottlenecks is required before optimization strategies can be developed, multiple I/O services should only be used in scenarios that clearly benefit from them.


7.4 Network Programming

Even though Boost.Asio is a library that can process any kind of data asynchronously, it is mainly used for network programming. This is because Boost.Asio supported network functions long before other I/O objects were added. Network functions are a perfect example of asynchronous processing since the transmission of data over a network can take a long time, and thus acknowledgments as well as error conditions are not immediately available.

Boost.Asio provides many I/O objects to develop network applications. The following example uses the boost::asio::ip::tcp::socket class to establish a connection to a different PC and download the 'Highscore' homepage, just like a browser does when pointed to www.highscore.de.

#include <boost/asio.hpp> 
#include <boost/array.hpp> 
#include <iostream> 
#include <string> 

boost::asio::io_service io_service; 
boost::asio::ip::tcp::resolver resolver(io_service); 
boost::asio::ip::tcp::socket sock(io_service); 
boost::array<char, 4096> buffer; 

void read_handler(const boost::system::error_code &ec, std::size_t bytes_transferred) 
{ 
  if (!ec) 
  { 
    std::cout << std::string(buffer.data(), bytes_transferred) << std::endl; 
    sock.async_read_some(boost::asio::buffer(buffer), read_handler); 
  } 
} 

void connect_handler(const boost::system::error_code &ec) 
{ 
  if (!ec) 
  { 
    boost::asio::write(sock, boost::asio::buffer("GET / HTTP/1.1\r\nHost: www.highscore.de\r\nConnection: close\r\n\r\n")); 
    sock.async_read_some(boost::asio::buffer(buffer), read_handler); 
  } 
} 

void resolve_handler(const boost::system::error_code &ec, boost::asio::ip::tcp::resolver::iterator it) 
{ 
  if (!ec) 
  { 
    sock.async_connect(*it, connect_handler); 
  } 
} 

int main() 
{ 
  boost::asio::ip::tcp::resolver::query query("www.highscore.de", "80"); 
  resolver.async_resolve(query, resolve_handler); 
  io_service.run(); 
} 

The most obvious part of the application is the usage of three handlers: The connect_handler() and read_handler() functions are called once the connection has been established and while data are being received, respectively. Why is the resolve_handler() function required though?

The Internet uses so-called IP addresses to identify individual PCs. IP addresses are essentially just long numbers that are hard to remember. It is much easier to remember names such as www.highscore.de. In order to use such a name on the Internet, it needs to be translated into the corresponding IP address through a process called name resolution. This process is accomplished by a so-called name resolver, which explains the name of the corresponding I/O object: boost::asio::ip::tcp::resolver.

Name resolution is a process that requires a connection to the Internet as well. Dedicated PCs, called DNS servers, act like a phone book and know which IP address is assigned to an individual PC. Since the process itself is largely transparent, it is only important to understand the concept behind it and why the boost::asio::ip::tcp::resolver I/O object is required. Because name resolution does not take place locally, it is also implemented as an asynchronous operation. The resolve_handler() function is called once the name resolution has either succeeded or failed with an error.
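For comparison, name resolution can also be done synchronously with the blocking resolve() method. The following sketch, which is not part of the example above, simply prints the resolved endpoints.

#include <boost/asio.hpp>
#include <iostream>

int main()
{
  boost::asio::io_service io_service;
  boost::asio::ip::tcp::resolver resolver(io_service);
  boost::asio::ip::tcp::resolver::query query("www.highscore.de", "80");
  // resolve() blocks until the name has been resolved or throws on error
  boost::asio::ip::tcp::resolver::iterator it = resolver.resolve(query);
  boost::asio::ip::tcp::resolver::iterator end;
  for (; it != end; ++it)
    std::cout << it->endpoint() << std::endl;
}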

Since receiving data presumes a successful connection which in turn presumes a successful name resolution, different asynchronous operations are started within the individual handlers. resolve_handler() accesses the I/O object sock to create a connection using the resolved address provided by the iterator it. sock is also being accessed inside of connect_handler() to send the HTTP request and to initiate the data reception. Since all of these operations are asynchronous, the names of the individual handlers are being passed as arguments. Depending on the corresponding handler, additional arguments are required such as the iterator it pointing to the resolved address or the buffer buffer storing the received data.

Once executed, the application creates an object query of type boost::asio::ip::tcp::resolver::query, representing a query for the name www.highscore.de and port 80, which is commonly used for the WWW. This query is passed to the async_resolve() method to resolve the name. Finally, main() simply calls the run() method of the I/O service to hand control over the asynchronous operations to the operating system.

Once the name resolution has finished, resolve_handler() is called, which checks whether the name could be resolved. If it could, the object ec, which contains the error condition, is set to 0. Only in this case is the socket accessed to create a connection. The address of the server is provided via the second argument, which is of type boost::asio::ip::tcp::resolver::iterator.

After calling the async_connect() method, connect_handler() is called automatically. Inside the handler, the ec object is evaluated to check whether or not a connection has been established. In case a connection is available, the async_read_some() method is called for the corresponding socket which initiates the read operation. To store the received data, a buffer is provided as the first argument. In the given example, it is of type boost::array which is part of the Boost C++ Library Array and is defined in boost/array.hpp.

The read_handler() function is called every time one or more bytes have been received and stored in the buffer. The exact number of bytes received is given via the parameter bytes_transferred of type std::size_t. As a rule, the handler should first evaluate the parameter ec to check for a reception error. If the data was received successfully, it is simply written to the standard output stream.

Please note that read_handler() calls the async_read_some() method again once the data has been written to std::cout. This is necessary because there is no guarantee that the whole homepage is received within a single asynchronous operation. The alternating calls of async_read_some() and read_handler() only end once the connection is closed, which happens after the web server has transmitted the complete homepage. In that case, an error is reported inside read_handler(), which prevents further output to the standard output stream and further calls of async_read_some() for this socket. The example application then terminates since no further asynchronous operations are outstanding.

While the previous example was used to retrieve the homepage of www.highscore.de, the next example actually illustrates a simple web server. The crucial difference is that the application does not connect to other PCs but rather waits for incoming connections.

#include <boost/asio.hpp> 
#include <string> 

boost::asio::io_service io_service; 
boost::asio::ip::tcp::endpoint endpoint(boost::asio::ip::tcp::v4(), 80); 
boost::asio::ip::tcp::acceptor acceptor(io_service, endpoint); 
boost::asio::ip::tcp::socket sock(io_service); 
std::string data = "HTTP/1.1 200 OK\r\nContent-Length: 13\r\n\r\nHello, world!"; 

void write_handler(const boost::system::error_code &ec, std::size_t bytes_transferred) 
{ 
} 

void accept_handler(const boost::system::error_code &ec) 
{ 
  if (!ec) 
  { 
    boost::asio::async_write(sock, boost::asio::buffer(data), write_handler); 
  } 
} 

int main() 
{ 
  acceptor.listen(); 
  acceptor.async_accept(sock, accept_handler); 
  io_service.run(); 
} 

The I/O object acceptor of type boost::asio::ip::tcp::acceptor - initialized with the protocol and the port - is used to wait for incoming connections from other PCs. The initialization happens via the endpoint object of type boost::asio::ip::tcp::endpoint, which configures the acceptor to wait on port 80 for incoming connections using version 4 of the Internet Protocol. Port 80 is the port typically used for the WWW.

After initializing the acceptor, main() first calls the listen() method to put the acceptor into receive mode before it waits for the initial connection using the async_accept() method. The socket used to send and receive data is passed as the first argument.

Once a PC tries to establish a connection, accept_handler() is called automatically. If the connection request was successful, the free-standing boost::asio::async_write() function is invoked to send the information stored in data via the socket. boost::asio::ip::tcp::socket also provides a method named async_write_some() to send data; however, it will invoke the associated handler whenever at least one byte has been sent. The handler would need to calculate how many bytes are left to send and invoke async_write_some() repeatedly until all bytes have been sent. This can be avoided by using boost::asio::async_write() since this asynchronous operation only terminates after all bytes of the buffer have been sent.
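To illustrate the difference, the following fragment sketches what write_handler() would roughly have to do if async_write_some() were used directly. It only shows the required bookkeeping, reuses sock and data from the example above, and assumes the connection has already been accepted.

std::size_t offset = 0;

void write_handler(const boost::system::error_code &ec, std::size_t bytes_transferred)
{
  if (!ec)
  {
    offset += bytes_transferred;
    // keep sending until all bytes of data have been transmitted
    if (offset < data.size())
      sock.async_write_some(boost::asio::buffer(data.data() + offset, data.size() - offset), write_handler);
  }
}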

Once all data has been sent, the empty function write_handler() is called in this example. Since all asynchronous operations have finished, the application is terminated. The connection to the other PC is closed accordingly.


7.5 Developing Boost.Asio Extensions

Even though Boost.Asio mainly supports network functions, adding additional I/O objects to perform different asynchronous operations is fairly easy. This section outlines the general layout of a Boost.Asio extension. While it is not mandatory, it provides a viable skeleton as a starting point for other extensions.

To add new asynchronous operations to Boost.Asio, three classes need to be implemented:

  • A class derived from boost::asio::basic_io_object representing the new I/O object. Developers using the new Boost.Asio extension will exclusively encounter this I/O object.

  • A class derived from boost::asio::io_service::service representing a service that is registered with the I/O service and can be accessed from the I/O object. The differentiation between the service and the I/O object is important since there is only one instance of the service per I/O service at any given time but a service can be accessed by multiple I/O objects.

  • A class not derived from any other class representing the service implementation. Since there is only one instance of a service per I/O service at any given time, the service creates an instance of its implementation for every I/O object. This instance manages the internal data pertinent to the corresponding I/O object.

Instead of just providing the skeleton, the Boost.Asio extension developed in this section is going to resemble the available boost::asio::deadline_timer object. The difference between the two is that the period for the timer is being passed as an argument to the wait() or async_wait() method instead of the constructor.

#include <boost/asio.hpp> 
#include <cstddef> 

template <typename Service> 
class basic_timer 
  : public boost::asio::basic_io_object<Service> 
{ 
  public: 
    explicit basic_timer(boost::asio::io_service &io_service) 
      : boost::asio::basic_io_object<Service>(io_service) 
    { 
    } 

    void wait(std::size_t seconds) 
    { 
      return this->service.wait(this->implementation, seconds); 
    } 

    template <typename Handler> 
    void async_wait(std::size_t seconds, Handler handler) 
    { 
      this->service.async_wait(this->implementation, seconds, handler); 
    } 
}; 

Every I/O object is usually implemented as a template class that is required to be instantiated with a service - typically with the service specifically developed for this I/O object. Whenever an I/O object is instantiated, the service is automatically registered with the I/O service by the parent class boost::asio::basic_io_object, unless it was already registered previously. This ensures that services used by any I/O object will only be registered once per I/O service.

The corresponding service is accessible within the I/O object via the service reference and is typically accessed to forward method calls to the service. Since services need to store data for every I/O object, an instance is automatically created for every I/O object using the service. This again happens with the aid of the parent class boost::asio::basic_io_object. The actual service implementation is passed as an argument to any method call to allow the service to specifically know which I/O object initiated the call. The service implementation is accessible via the implementation property.

In general, an I/O object is relatively simple: While the registration of the service and the creation of a service implementation are done by the parent class boost::asio::basic_io_object, method calls are simply forwarded to the corresponding service, passing the service implementation of the I/O object as an argument.

#include <boost/asio.hpp> 
#include <boost/thread.hpp> 
#include <boost/bind.hpp> 
#include <boost/scoped_ptr.hpp> 
#include <boost/shared_ptr.hpp> 
#include <boost/weak_ptr.hpp> 
#include <boost/system/error_code.hpp> 

template <typename TimerImplementation = timer_impl> 
class basic_timer_service 
  : public boost::asio::io_service::service 
{ 
  public: 
    static boost::asio::io_service::id id; 

    explicit basic_timer_service(boost::asio::io_service &io_service) 
      : boost::asio::io_service::service(io_service), 
      async_work_(new boost::asio::io_service::work(async_io_service_)), 
      async_thread_(boost::bind(&boost::asio::io_service::run, &async_io_service_)) 
    { 
    } 

    ~basic_timer_service() 
    { 
      async_work_.reset(); 
      async_io_service_.stop(); 
      async_thread_.join(); 
    } 

    typedef boost::shared_ptr<TimerImplementation> implementation_type; 

    void construct(implementation_type &impl) 
    { 
      impl.reset(new TimerImplementation()); 
    } 

    void destroy(implementation_type &impl) 
    { 
      impl->destroy(); 
      impl.reset(); 
    } 

    void wait(implementation_type &impl, std::size_t seconds) 
    { 
      boost::system::error_code ec; 
      impl->wait(seconds, ec); 
      boost::asio::detail::throw_error(ec); 
    } 

    template <typename Handler> 
    class wait_operation 
    { 
      public: 
        wait_operation(implementation_type &impl, boost::asio::io_service &io_service, std::size_t seconds, Handler handler) 
          : impl_(impl), 
          io_service_(io_service), 
          work_(io_service), 
          seconds_(seconds), 
          handler_(handler) 
        { 
        } 

        void operator()() const 
        { 
          implementation_type impl = impl_.lock(); 
          if (impl) 
          { 
              boost::system::error_code ec; 
              impl->wait(seconds_, ec); 
              this->io_service_.post(boost::asio::detail::bind_handler(handler_, ec)); 
          } 
          else 
          { 
              this->io_service_.post(boost::asio::detail::bind_handler(handler_, boost::asio::error::operation_aborted)); 
          } 
        } 

      private: 
        boost::weak_ptr<TimerImplementation> impl_; 
        boost::asio::io_service &io_service_; 
        boost::asio::io_service::work work_; 
        std::size_t seconds_; 
        Handler handler_; 
    }; 

    template <typename Handler> 
    void async_wait(implementation_type &impl, std::size_t seconds, Handler handler) 
    { 
      this->async_io_service_.post(wait_operation<Handler>(impl, this->get_io_service(), seconds, handler)); 
    } 

  private: 
    void shutdown_service() 
    { 
    } 

    boost::asio::io_service async_io_service_; 
    boost::scoped_ptr<boost::asio::io_service::work> async_work_; 
    boost::thread async_thread_; 
}; 

template <typename TimerImplementation> 
boost::asio::io_service::id basic_timer_service<TimerImplementation>::id; 

In order to be integrated with Boost.Asio, a service must fulfill a couple of requirements:

  • It needs to be derived from boost::asio::io_service::service. The constructor must expect a reference to an I/O service which is passed to the constructor of boost::asio::io_service::service accordingly.

  • Any service must contain a static public property id of type boost::asio::io_service::id. Services are identified using this property within the I/O service.

  • Two public methods named construct() and destroy(), both expecting an argument of type implementation_type, must be defined. implementation_type is typically a type definition for the service implementation. As shown in the above example, a boost::shared_ptr object can be used to easily instantiate a service implementation in construct() and to release it in destroy(). Since both methods are automatically called whenever an I/O object is created or destroyed, a service can create and destroy service implementations for each I/O object using construct() and destroy(), respectively.

  • A method named shutdown_service() must be defined; however, it can be private. For common Boost.Asio extensions, this is usually an empty method. It is only being used by services that are more tightly integrated with Boost.Asio. Nonetheless, the method must be present in order to compile the extension successfully.

In order to forward method calls to the corresponding service, forwarding methods need to be defined in the I/O object. These methods are typically named like the methods of the I/O object itself, e.g. wait() and async_wait() in the above example. While synchronous methods such as wait() simply access the service implementation and call a blocking method, the trick for asynchronous operations such as async_wait() is to call the blocking method within a thread.

Implementing asynchronous operations with the help of a thread is usually done by accessing an additional I/O service. The above example contains a property named async_io_service_ of type boost::asio::io_service. The run() method of this I/O service is started within its own thread, created as async_thread_ of type boost::thread inside the constructor of the service. The third property, async_work_ of type boost::scoped_ptr<boost::asio::io_service::work>, is required to prevent the run() method from returning immediately, which could otherwise happen since there are no outstanding asynchronous operations at creation time. Creating an object of type boost::asio::io_service::work and binding it to the I/O service, which also happens inside the service constructor, keeps the run() method from returning.
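The effect of the work object can be sketched in isolation. The following small program is not part of the extension; it only demonstrates how a work object keeps run() from returning.

#include <boost/asio.hpp>
#include <boost/scoped_ptr.hpp>
#include <iostream>

int main()
{
  boost::asio::io_service io_service;
  boost::scoped_ptr<boost::asio::io_service::work> work(new boost::asio::io_service::work(io_service));
  // while the work object exists, run() would block even without pending asynchronous operations;
  // resetting the pointer destroys the work object and allows run() to return
  work.reset();
  io_service.run();
  std::cout << "run() returned" << std::endl;
}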

A service could also be implemented without accessing its own I/O service - a single thread would suffice. The reason for using an additional I/O service for the extra thread is quite simple: Threads can communicate fairly easily with each other through an I/O service. In the example, async_wait() creates a function object of type wait_operation and passes it to the internal I/O service via the post() method. The overloaded operator()() of this function object is then called inside the thread that executes the run() method of the internal I/O service. post() thus offers a simple way of executing a function object in a different thread.
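The following sketch, again not part of the extension itself, shows post() in isolation: a function is handed to the I/O service and executed by the thread that called run().

#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <boost/scoped_ptr.hpp>
#include <iostream>

boost::asio::io_service io_service;

void run()
{
  io_service.run();
}

void hello()
{
  std::cout << "executed in the run() thread" << std::endl;
}

int main()
{
  // the work object keeps run() from returning while main() posts the function
  boost::scoped_ptr<boost::asio::io_service::work> work(new boost::asio::io_service::work(io_service));
  boost::thread worker(run);
  io_service.post(hello); // hello() will be executed by the worker thread
  work.reset();           // allow run() to return once all handlers are done
  worker.join();
}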

The overloaded operator()() of wait_operation essentially does the same work as the wait() method: It calls the blocking wait() method of the service implementation. There is, however, the possibility that the I/O object, including its service implementation, is destroyed while the thread is executing operator()(). If the service implementation has been destroyed in destroy(), operator()() must no longer access it. This is prevented by using a weak pointer, known from the first chapter: lock() on the weak pointer impl_ returns a shared pointer to the service implementation if it still exists; otherwise it returns an empty pointer. In that case, operator()() does not access the service implementation but calls the handler with the error boost::asio::error::operation_aborted.

#include <boost/asio/error.hpp> // for boost::asio::error::operation_aborted 
#include <boost/system/error_code.hpp> 
#include <cstddef> 
#include <windows.h> 

class timer_impl 
{ 
  public: 
    timer_impl() 
      : handle_(CreateEvent(NULL, FALSE, FALSE, NULL)) 
    { 
    } 

    ~timer_impl() 
    { 
      CloseHandle(handle_); 
    } 

    void destroy() 
    { 
      SetEvent(handle_); 
    } 

    void wait(std::size_t seconds, boost::system::error_code &ec) 
    { 
      DWORD res = WaitForSingleObject(handle_, seconds * 1000); 
      if (res == WAIT_OBJECT_0) 
        ec = boost::asio::error::operation_aborted; 
      else 
        ec = boost::system::error_code(); 
    } 

  private: 
    HANDLE handle_; 
}; 

The service implementation timer_impl uses Windows API functions and can therefore only be compiled and used on Windows. Its purpose is merely to illustrate one potential implementation.

timer_impl provides two essential methods: wait() is called to wait for one or more seconds, and destroy() is used to cancel a wait operation. Cancellation is mandatory since, for asynchronous operations, the wait() method is called inside its own thread. If the I/O object, including its service implementation, is destroyed, the blocking wait() method should be canceled as soon as possible, which is done with destroy().

This Boost.Asio extension can be used as follows.

#include <boost/asio.hpp> 
#include <iostream> 
#include "basic_timer.hpp" 
#include "timer_impl.hpp" 
#include "basic_timer_service.hpp" 

void wait_handler(const boost::system::error_code &ec) 
{ 
  std::cout << "5 s." << std::endl; 
} 

typedef basic_timer<basic_timer_service<> > timer; 

int main() 
{ 
  boost::asio::io_service io_service; 
  timer t(io_service); 
  t.async_wait(5, wait_handler); 
  io_service.run(); 
} 

This Boost.Asio extension is used just like boost::asio::deadline_timer in the example application at the beginning of this chapter. In practice, boost::asio::deadline_timer should be preferred since it is already integrated with Boost.Asio. The sole purpose of this extension was to show how Boost.Asio can be extended with new asynchronous operations.

Directory Monitor is a real-world example of a Boost.Asio extension that provides an I/O object able to monitor directories. If a file inside a monitored directory is created, modified or deleted, a handler is called accordingly. The current version supports both Windows and Linux (Kernel version 2.6.13 or higher).


7.6 Exercises

You can buy solutions to all exercises in this book as a ZIP file.

  1. Modify the server from Section 7.4, “Network Programming” so that it does not terminate after a single request but rather processes an arbitrary number of requests.

  2. Extend the client from Section 7.4, “Network Programming” to immediately parse the received HTML code for a URL. If found, the corresponding resource should be downloaded as well. For this exercise, the first URL found should be utilized. Ideally, the website as well as the resource should be saved in two files rather than writing both to the standard output stream.

  3. Create a client/server application to transmit a file between two PCs. Once the server is started, it should display the IP addresses of all local interfaces and wait for client connections. One of the available server IP addresses as well as the name of a local file should be passed to the client as command-line arguments. The client should transmit the file to the server which in turn saves it accordingly. While transmitting, the client should provide some visual indication of the progress to the user.