
Thursday, May 7

Could not send data to output queue (parsingQueue), retrying...

This TailingProcessor message means that it was unable to insert data into the parsingQueue, which, as the name suggests, is where event parsing happens. It only happens when the queue is full. To confirm this, run 'grep blocked=true /opt/splunk/var/log/splunk/metrics.log*' and you should see many results.
Data travels through Splunk's queues more or less linearly (parsingQueue → aggQueue → typingQueue → indexQueue), so if a queue further down the line (such as the indexQueue) is blocked, it will eventually fill and block the queues feeding it (such as the parsingQueue).

Note: the metrics.log output will indicate which queues are blocked.
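
To see which queues are reporting the problem, something along these lines will summarise the blocked=true entries per queue (queue names in metrics.log are lower-case, e.g. parsingqueue, indexqueue; adjust the path if Splunk is not installed under /opt/splunk):

  # Count blocked=true entries per queue name in metrics.log
  grep "blocked=true" /opt/splunk/var/log/splunk/metrics.log* \
    | grep -o "name=[^,]*" | sort | uniq -c | sort -rn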

Below are some checks to perform:

Does metrics.log on the indexer (receiver of the forwarded data) also contain blocked=true messages?

  • If so, the problem is likely on the indexing side only - perhaps the indexing box is overloaded (slow disks, etc.).


If not: does metrics.log on the forwarder indicate that only the parsingQueue is blocked, or the indexQueue as well?
  • Seeing only the parsingQueue blocked would suggest something like too much time being spent running complex regexes on the data.
  • A blocked indexQueue would mean the data isn't being sent to the indexer fast enough. What type of link is there between the forwarders and the indexer, and how much bandwidth is the forwarding port actually using? (See the check just below.)
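
One rough way to gauge the forwarder's actual output rate is its own thruput metrics in metrics.log (the exact field names, such as instantaneous_kbps and average_kbps, may vary by version, so treat this as a sketch):

  # Recent forwarding throughput as reported by the forwarder itself
  grep "group=thruput, name=thruput" /opt/splunk/var/log/splunk/metrics.log | tail -5
  # Compare the instantaneous_kbps / average_kbps values with the link's capacity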

Possible Solutions / Common Workarounds:

1. It's certainly possible that the link between your Forwarder and Indexer is having issues. That would point to either network problems or problems with the configuration of outputs.conf on the forwarder (to reach the indexer) or inputs.conf on the indexer (to receive from the forwarder).
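
A quick way to check whether the forwarder is actually connected to its configured indexer(s) is Splunk's own CLI (the output wording varies by version):

  # Run on the forwarder; lists active vs. configured-but-inactive receivers
  /opt/splunk/bin/splunk list forward-server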

2. The network port may not be open between your Forwarder and the Indexer.
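
Assuming the common default receiving port 9997 (yours may differ) and a placeholder host name, a basic reachability test from the forwarder host is:

  # Test whether the indexer's receiving port is reachable from the forwarder
  nc -vz indexer.example.com 9997
  # or, if nc is not installed:
  telnet indexer.example.com 9997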

3. Stop the Forwarder, remove the metrics logs under /opt/splunk/var/log/splunk/metrics*, and restart the Forwarder.
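
Assuming the default /opt/splunk install path, that amounts to:

  # Stop the forwarder, clear the old metrics logs, then start it again
  /opt/splunk/bin/splunk stop
  rm /opt/splunk/var/log/splunk/metrics.log*
  /opt/splunk/bin/splunk start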

Splunk starts blocking when free disk space falls below 2 GB (the default minFreeSpace), and it sounds like this was the cause of your problem.
Just deleting the files works as a temporary fix, but they will grow back again :)

You can change the default in "/opt/splunk/etc/system/local/server.conf" (the value is in megabytes):

[diskUsage]
minFreeSpace = 1000
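
To confirm how much free space is actually left on the volume holding Splunk:

  # Free space on the filesystem that contains /opt/splunk
  df -h /opt/splunk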

4. This error may also be caused by a misconfigured outputs.conf/inputs.conf.
Ensure the listening port on the Indexer is configured as a splunktcp receiver and is not splunkd's management port.
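
As a minimal sketch of the matching pair (the host name and port here are placeholders, not your values):

  # outputs.conf on the forwarder
  [tcpout]
  defaultGroup = my_indexers

  [tcpout:my_indexers]
  server = indexer.example.com:9997

  # inputs.conf on the indexer
  [splunktcp://9997]
  disabled = 0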


5. We may have to raise the ulimits (in particular the open-file limit) on the OS and increase max_fd in limits.conf on the Splunk Forwarder.
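
As a sketch (the user name, the values, and the [inputproc] stanza placement are assumptions to check against your version's limits.conf spec):

  # OS side: raise the open-file limit for the user running Splunk,
  # e.g. in /etc/security/limits.conf:
  #   splunk  soft  nofile  8192
  #   splunk  hard  nofile  8192
  # Verify with: ulimit -n (as the splunk user)

  # Splunk side: $SPLUNK_HOME/etc/system/local/limits.conf
  [inputproc]
  max_fd = 256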



6. We can add the entries below to $SPLUNK_HOME/etc/system/local/server.conf to increase the size of the parsing queue:

  [queue=parsingQueue]
  maxSize = 10MB
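
After restarting Splunk, you can watch whether the larger queue still fills up (queue names in metrics.log are lower-case; the exact field names are an assumption to verify on your version):

  # Recent fill level of the parsing queue
  grep "group=queue, name=parsingqueue" /opt/splunk/var/log/splunk/metrics.log | tail -5
  # current_size_kb repeatedly close to max_size_kb means it is still saturating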


If your problem is not yet solved, then click on the link below:

               Part Two Solutions



Hopefully this gets rid of the message below from your log:

Could not send data to output queue (parsingQueue), retrying...

Happy Splunking !!



