Answer 1:

Your first attempt might be failing simply due to a field-name case mismatch: you had jobname in your events but jobName in the lookup.

As I understand it, you want to return only those events from the main search that are included in your lookup. One approach, which you already tried, is to list all the events from the main search, do a lookup to fill the destination field (in your case, that freq_count field), and then filter to keep only the results where that field is filled. The downside of that approach is that you have to process the whole huge result set from the main search. As an alternative, you can use a subsearch to generate the list of jobNames and filter the main search with it. I'm not sure which approach would be more effective here; you'd have to check for yourself.

One more thing: don't do search | table | lookup. In fact, you almost never want to do search | table | anything at all. The table command moves the processing of your events from the indexers to the search head, which kills any parallelization for the commands further down the stream. table should almost never be used as anything but the last command in the chain.

Answer 2:

If, in other words, you only want a count of matching records as well as a count of non-matching records, use a similar aggregation but just do the counts.

Answer 3:

Splunk's lookup feature lets you reference fields in an external CSV file that match fields in your event data; using this match, you can enrich your event data with additional fields. These lookup table recipes briefly show advanced solutions to a common, real-world problem. In the scenario you described, you will get one row per Firewall_Name listing the lookup(s) that matched:

| stats values(lookup) as lookup by Firewall_Name
| eval lookup = if(mvcount(lookup) > 1, mvjoin(lookup, " + "), lookup)

If you only care about ABC.csv, you can read it directly with

| inputlookup ABC.csv

and to single out a specific file such as XYZ.csv, replace the if() with a case():

| eval lookup = case(mvcount(lookup) > 1, mvjoin(lookup, " + "), lookup = "XYZ.csv", lookup)
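The subsearch alternative suggested above can be sketched like this. This is a minimal sketch, not a tested search: the index, source filters, and field names are taken from the question, the rename inside the subsearch works around the jobname/jobName case mismatch, and table stays as the last command in the pipeline:

```
earliest=-7d index=log-13120-nonprod-c laas_appId=qbmp.prediction* "jobPredictionAnalysis" prediction lastEndDelta
    [ | inputlookup freq_used_jobs_bmp_3months.csv
      | fields jobName
      | rename jobName as jobname ]
| table jobname, prediction_status, predicted_end_time
```

The subsearch expands into an implicit (jobname="..." OR jobname="...") filter, so non-matching events are discarded at the indexers before any further processing. Note that subsearch output is capped (10,000 results by default in many versions), so a 16,000-row lookup may be silently truncated; check your limits.conf settings or split the list if that applies.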
Question:

I have an SPL query that runs on an index and sourcetype containing millions of jobnames. I want my SPL to read a list of jobnames produced by a different query and use it as a subsearch. I have created a lookup CSV for this list of 16,000 jobnames and want to run my search against it. (A related formulation of the same problem: I want to do a match between a CSV file and my Splunk search. In the CSV file, the field 'host' corresponds to a list of computer names that should match my searches, meaning that for every host I want to pull the free disk space, the date of last logon, the last reboot, etc.)

Main SPL that runs on millions of jobnames:

earliest=-7d index=log-13120-nonprod-c laas_appId=qbmp.prediction* "jobPredictionAnalysis" prediction lastEndDelta | table jobname, prediction_status, predicted_end_time

freq_used_jobs_bmp_3months.csv is a simple two-column file. I want to operate on, and write SPL queries against, this list of jobNames only. I tried to join the main query with this input file:

| lookup freq_used_jobs_bmp_3months.csv jobName output freq_count

but it failed with:

Na_prod_secure-ist-indexer-1_.com-23000] Streamed search execute failed because: Error in 'lookup' command: Could not construct lookup 'freq_used_jobs_bmp_3months.csv, jobName, output, freq_count'.

The other option is to somehow combine (join) the main query with a subsearch instead of a lookup file. (You can use the asterisk (*) as a wildcard to specify a list of fields.) A subsearch that will list the smaller number of jobNames used in the last 3 months:

earliest=-90d index="log-13120-prod-c" sourcetype="autosys_service_secondary:app" OR "autosys_service_primary:app" "request:JobSearch" installation="P*" NOT "*%*" | stats count as freq_count by jobName
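One hedged guess at the lookup error above: the lookup command's OUTPUT keyword is conventionally written in uppercase, and in many Splunk versions a lowercase output is parsed as just another lookup field, which can produce exactly this kind of "Could not construct lookup" message. Assuming freq_used_jobs_bmp_3months.csv has been uploaded as a lookup table file, a corrected version might look like:

```
earliest=-7d index=log-13120-nonprod-c laas_appId=qbmp.prediction* "jobPredictionAnalysis" prediction lastEndDelta
| lookup freq_used_jobs_bmp_3months.csv jobName AS jobname OUTPUT freq_count
| where isnotnull(freq_count)
| table jobname, prediction_status, predicted_end_time
```

The jobName AS jobname clause maps the lookup's column onto the lowercase field actually present in the events, and the where clause keeps only events that matched a row in the CSV.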