I am writing an application to measure UDP packet loss over...


Question by lconklin
Submitted on 1/23/2004
Related FAQ: TCP/IP FAQ; Frequently Asked Questions (1999-09) Part 1 of 2
I am writing an application to measure UDP packet loss over our IP network.

What I am having a problem with is the sending side of the equation. I am sending a UDP message every five seconds to 10 servers in different locations. When I run snoop on the sending side, I occasionally see that a packet never left our server. I can tell because snoop shows the elapsed time between UDP messages: most messages show a gap of roughly 4.99991 or 5.0014 seconds, but occasionally I see a gap of roughly 10.0001 or 9.99987 seconds, which means one message in between never appeared on the wire.

I am not getting any errors back from any of the socket calls. I understand that once a message has been handed to the IP stack, the socket calls will not return an error, since that is as far down the OSI stack as they go. We also see packets leave our server and never reach the other servers; that is acceptable, since it is genuine UDP packet loss on the network. We have no congestion on our network, so we expect a packet-loss rate of .01 or lower. I have not been able to find anything that explains why an outgoing UDP message would fail to leave our server and never be seen by snoop.
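In outline, the sending side does something like this (a simplified sketch, not the actual code; the address and port are placeholders):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in dst;
        memset(&dst, 0, sizeof dst);
        dst.sin_family = AF_INET;
        dst.sin_port = htons(9000);                     /* placeholder port */
        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr); /* placeholder host */

        const char msg[] = "probe";
        for (;;) {
            /* sendto() succeeding only means the stack accepted the
               datagram; it does not mean the packet reached the wire. */
            if (sendto(fd, msg, sizeof msg, 0,
                       (struct sockaddr *)&dst, sizeof dst) < 0)
                perror("sendto");
            sleep(5);                                   /* one probe every 5 s */
        }
    }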

One thing that seems to have helped: I used setsockopt() to increase the size of the send queue to 65535. It has helped, but it has not solved the problem. Does anyone have any suggestions?
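Roughly, the call follows the standard SO_SNDBUF pattern below; reading the value back with getsockopt() shows what the kernel actually granted, since some systems clamp or round the request:

    #include <stdio.h>
    #include <sys/socket.h>

    /* Request a larger send buffer, then read back the size the kernel
       actually granted. */
    static void grow_sndbuf(int fd, int bytes)
    {
        socklen_t len = sizeof bytes;
        if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof bytes) < 0)
            perror("setsockopt(SO_SNDBUF)");
        else if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &bytes, &len) == 0)
            printf("SO_SNDBUF is now %d bytes\n", bytes);
    }

    /* usage: grow_sndbuf(fd, 65535); */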


Answer by anonymous
Submitted on 9/11/2006
I encountered the same problem.

 

Answer by Yaniv
Submitted on 10/29/2006
Notice that drops on the sender side are a real cause of some UDP packet loss.
Think of it this (simplified) way: if your application hands a UDP packet to the stack and it cannot be sent at that moment, it may simply be dropped (and lost). UDP makes no delivery guarantee, so enlarging the send queue would not be very helpful.

Suggestions:
1. Space out your packets. Rather than sending 10 packets to all the servers every 5 seconds, send one packet every half-second (see the sketch after this list).
2. Increase the priority of the sending thread.
3. Try to ensure that the sending system is not overly busy: no excessive CPU usage, no competing packets being sent, and no thrashing or unnecessary disk swapping.
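
As a sketch of suggestion 1 (the destination table here is hypothetical; the half-second gap keeps the per-server interval at the original 5 seconds):

    #include <stdio.h>
    #include <time.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    #define NUM_DESTS 10

    /* Hypothetical table of the 10 server addresses, filled in elsewhere. */
    extern struct sockaddr_in dests[NUM_DESTS];

    void probe_loop(int fd)
    {
        const char msg[] = "probe";
        struct timespec gap = { 0, 500000000L };  /* 500 ms between sends */
        for (;;) {
            for (int i = 0; i < NUM_DESTS; i++) {
                if (sendto(fd, msg, sizeof msg, 0,
                           (struct sockaddr *)&dests[i], sizeof dests[i]) < 0)
                    perror("sendto");  /* e.g. ENOBUFS if the stack is busy */
                nanosleep(&gap, NULL);
            }
            /* 10 destinations x 0.5 s = 5 s per round, so each server
               still gets one probe every 5 seconds. */
        }
    }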

Hope this helped,
Yaniv - http://www.yanivpessach.com

 

