So – I am creating the acceptor object with the following constructor:
_acceptor(_io_service, boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), port), true)
and I start listening here (start_accept):
_new_connection.reset(new connection(*_io_services.front(), _connection_id));
_acceptor.async_accept(_new_connection->get_socket(),
    boost::bind(&server::handle_accept, this,
        boost::asio::placeholders::error));
and handle_accept:
if (!error)
{
    _new_connection->start();
}
// continue waiting for incoming connections
start_accept();
Basically, my code for accepting incoming connections is the same as in the HTTP Server 2 example.
The problem appears only when the first incoming connection is not closed: the second incoming connection is then queued and waits until the first one is closed.
According to these two answers:
boost::asio acceptor reopen and async read after EOF
How to launch an “event” when my Boost::asio tcp server just start running ( AKA io_service.run() )?
the acceptor object adds all incoming connections to a queue and does not accept them until the pending connection is closed.
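For reference, the queue in question is the OS listen backlog that the acceptor sets up when it is constructed: the endpoint constructor shown at the top opens, binds and listens in a single call, and incoming connections sit in that backlog until an async_accept dequeues them. A rough sketch of the equivalent explicit sequence, reusing _acceptor and port from the snippets above (not the author's code):

// Sketch: what the convenience constructor does under the hood.
boost::asio::ip::tcp::endpoint endpoint(boost::asio::ip::tcp::v4(), port);
_acceptor.open(endpoint.protocol());
_acceptor.set_option(boost::asio::ip::tcp::acceptor::reuse_address(true)); // the third constructor argument above
_acceptor.bind(endpoint);
_acceptor.listen(boost::asio::socket_base::max_connections);               // default OS backlog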
I want all incoming connections to be handled immediately – so that none of them are left pending in the acceptor's queue – but so far I have not found any solution.
Could you help me: what is the correct way to achieve this?
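One commonly suggested pattern (illustrative only, not necessarily the fix for this particular case) is to re-issue async_accept right away in handle_accept, as the code above already does, and to run the io_service on more than one thread so that a long-lived connection cannot monopolise the thread that services the acceptor. A minimal sketch, assuming a single io_service and Boost.Thread:

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>

// Run one io_service on a small pool of threads so that a long-running
// completion handler on one connection does not starve the acceptor.
void run_service(boost::asio::io_service& io_service, std::size_t thread_count)
{
    boost::thread_group pool;
    for (std::size_t i = 0; i < thread_count; ++i)
        pool.create_thread(boost::bind(&boost::asio::io_service::run, &io_service));
    pool.join_all(); // returns once the io_service has run out of work
}

With more than one thread, handlers that touch the same connection may need to be wrapped in a boost::asio::io_service::strand to avoid concurrent access.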
The connection->start() function:
void connection::start()
{
    _bsocket.async_read_some(boost::asio::buffer(_bbuffer),
        boost::bind(&connection::handle_browser_read_headers, shared_from_this(),
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}
Graphical representation
Update: Boost.Asio log
@asio|1368460995.389629|0*1|socket@00CCFBE4.async_accept
@asio|1368461003.855113|>1|ec=system:0
@asio|1368461003.855113|1*2|socket@00E26850.async_receive
@asio|1368461003.855113|>2|ec=system:0,bytes_transferred=318
@asio|1368461003.856113|1*3|socket@00CCFBE4.async_accept
@asio|1368461003.856113|<1|
@asio|1368461003.856113|2*4|resolver@00E268D8.async_resolve
@asio|1368461003.856113|<2|
@asio|1368461003.866114|>4|ec=system:0,...
@asio|1368461003.866114|4*5|socket@00E26894.async_connect
@asio|1368461003.868114|<4|
@asio|1368461004.204133|>5|ec=system:0
@asio|1368461004.204133|5*6|socket@00E26894.async_send
@asio|1368461004.204133|<5|
@asio|1368461004.204133|>6|ec=system:0,bytes_transferred=302
@asio|1368461004.204133|6*7|socket@00E26894.async_receive
@asio|1368461004.204133|<6|
@asio|1368461004.613156|>7|ec=system:0,bytes_transferred=16384
@asio|1368461004.613156|7*8|socket@00E26850.async_send
@asio|1368461004.614157|<7|
@asio|1368461004.614157|>8|ec=system:0,bytes_transferred=16384
@asio|1368461004.614157|8*9|socket@00E26894.async_receive
@asio|1368461004.614157|<8|
@asio|1368461004.614157|>9|ec=system:0,bytes_transferred=1946
@asio|1368461004.614157|9*10|socket@00E26850.async_send
@asio|1368461004.614157|<9|
@asio|1368461004.614157|>10|ec=system:0,bytes_transferred=1946
@asio|1368461004.614157|10*11|socket@00E26894.async_receive
@asio|1368461004.614157|<10|
@asio|1368461004.618157|>11|ec=system:0,bytes_transferred=14080
@asio|1368461004.618157|11*12|socket@00E26850.async_send
@asio|1368461004.619157|<11|
@asio|1368461004.619157|>12|ec=system:0,bytes_transferred=14080
@asio|1368461004.619157|12*13|socket@00E26894.async_receive
@asio|1368461004.619157|<12|
@asio|1368461019.248994|>13|ec=asio.misc:2,bytes_transferred=0
@asio|1368461019.248994|13|socket@00E26894.close
@asio|1368461019.248994|13|socket@00E26850.close
@asio|1368461019.248994|<13|
@asio|1368461019.253994|0|resolver@00E268D8.cancel
@asio|1368461019.253994|>3|ec=system:0
@asio|1368461019.253994|3*14|socket@00E32688.async_receive
@asio|1368461019.254994|3*15|socket@00CCFBE4.async_accept
@asio|1368461019.254994|<3|
@asio|1368461019.254994|>14|ec=system:0,bytes_transferred=489
@asio|1368461019.254994|14*16|resolver@00E32710.async_resolve
@asio|1368461019.254994|<14|
@asio|1368461019.281995|>16|ec=system:0,...
@asio|1368461019.281995|16*17|socket@00E326CC.async_connect
@asio|1368461019.282996|<16|
@asio|1368461019.293996|>17|ec=system:0
@asio|1368461019.293996|17*18|socket@00E326CC.async_send
@asio|1368461019.293996|<17|
@asio|1368461019.293996|>18|ec=system:0,bytes_transferred=470
@asio|1368461019.293996|18*19|socket@00E326CC.async_receive
@asio|1368461019.293996|<18|
@asio|1368461019.315997|>19|ec=system:0,bytes_transferred=11001
@asio|1368461019.315997|19*20|socket@00E32688.async_send
@asio|1368461019.349999|<19|
@asio|1368461019.349999|>20|ec=system:0,bytes_transferred=11001
@asio|1368461019.349999|20|socket@00E326CC.close
@asio|1368461019.349999|20|socket@00E32688.close
@asio|1368461019.349999|<20|
@asio|1368461019.349999|0|resolver@00E32710.cancel
I have found that the acceptor's behaviour depends on which function I use to read data from the server socket. The connection class reads data from the browser, modifies the request URL, connects to the host and sends the request, then reads the response from the server and writes it back to the browser. So at the point where I need to read the server's response body, I use this function:
_ssocket.async_read_some(boost::asio::buffer(_sbuffer),
    boost::bind(&connection::handle_server_read_body, shared_from_this(),
        boost::asio::placeholders::error,
        boost::asio::placeholders::bytes_transferred));
If no Content-Length is specified in the server's response headers, I read until EOF. When async_read_some is called and there is no more data left to read on the socket, EOF is only raised after a wait of ~15 seconds, and the acceptor does not accept any new incoming connections during those 15 seconds.
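For context, a read handler in this style usually has to distinguish EOF from a real error. A minimal sketch, where the member names mirror the snippets above but the body is hypothetical (handle_browser_write is an assumed helper that would re-issue the server read after forwarding the chunk):

void connection::handle_server_read_body(const boost::system::error_code& error,
                                         std::size_t bytes_transferred)
{
    if (!error)
    {
        // Forward the chunk that just arrived to the browser.
        boost::asio::async_write(_bsocket,
            boost::asio::buffer(_sbuffer, bytes_transferred),
            boost::bind(&connection::handle_browser_write, shared_from_this(),
                boost::asio::placeholders::error));
    }
    else if (error == boost::asio::error::eof)
    {
        // asio.misc:2 in the log above: the server closed the connection,
        // so the body is complete and both sockets can be closed.
        _ssocket.close();
        _bsocket.close();
    }
    // Any other error: drop the connection by letting it go out of scope.
}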
But if I use another variant, async_read:
boost::asio::async_read(_ssocket, boost::asio::buffer(_sbuffer),
    boost::bind(&connection::handle_server_read_body, shared_from_this(),
        boost::asio::placeholders::error,
        boost::asio::placeholders::bytes_transferred));
the incoming connections are accepted just fine. However, boost::asio::async_read works somewhat more slowly: it waits for the data to be read from the socket and does not invoke the handler until then, so I thought I would specify transfer_at_least:
boost::asio::async_read(_ssocket, boost::asio::buffer(_sbuffer),
    boost::asio::transfer_at_least(1),
    boost::bind(&connection::handle_server_read_body, shared_from_this(),
        boost::asio::placeholders::error,
        boost::asio::placeholders::bytes_transferred));
Yes, that made it better – but the problem with accepting new connections came back :/
So what is the real difference between async_read_some and boost::asio::async_read? It feels like something is blocking.
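For what it's worth, boost::asio::async_read is a composed operation built on top of async_read_some: by default it keeps issuing async_read_some calls until the supplied buffer is completely full, while async_read_some itself completes after a single read of whatever happens to be available. Roughly (not the author's code; handler stands for the bound completion handler shown above):

// Default completion condition: fill the whole buffer before calling the handler.
boost::asio::async_read(_ssocket, boost::asio::buffer(_sbuffer),
    boost::asio::transfer_all(), handler);

// Completes as soon as at least one byte has arrived, which is close to a
// single async_read_some call.
boost::asio::async_read(_ssocket, boost::asio::buffer(_sbuffer),
    boost::asio::transfer_at_least(1), handler);

Both initiating calls return immediately; the difference lies only in when the completion handler fires.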
Workaround
boost::asio::async_read(socket(), boost::asio::buffer(_incoming),
    boost::asio::transfer_at_least(1),
    boost::bind(&server_class::handle_read, this,
        boost::asio::placeholders::error,
        boost::asio::placeholders::bytes_transferred));
Then I dump whatever was received into a parser, to make sure it is sane (handling state, etc.).
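A sketch of what such a read loop might look like, assuming _incoming is a boost::array<char, N> and _parser is some parser object with a feed() method (both hypothetical, as is the body of handle_read):

// Hypothetical handle_read: feed whatever arrived to the parser and
// immediately re-issue the read until EOF or another error.
void server_class::handle_read(const boost::system::error_code& error,
                               std::size_t bytes_transferred)
{
    if (error == boost::asio::error::eof)
        return;                                   // peer closed: nothing more to read
    if (error)
        return;                                   // any other error: stop reading

    _parser.feed(_incoming.data(), bytes_transferred);

    boost::asio::async_read(socket(), boost::asio::buffer(_incoming),
        boost::asio::transfer_at_least(1),
        boost::bind(&server_class::handle_read, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}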
Otherwise, I believe I am doing everything you are doing, based on what I can see here.
If this works, then asio's behaviour does seem unintuitive.