Ingested date not appearing correctly formatted

Hi,
I have a date that looks like: 6/22/2021
When I ingest it, I use the format "M/dd/yyyy", i.e. {"format": "M/dd/yyyy"}.
The data shows up as 6/21/2021, a day behind what the file has in it. I checked the computer date, OS date, browser date, and advanced settings. All seem correct. Any ideas? Thanks!

Hi Jeff,

Please verify the Advanced settings in Siren:

Use this format under the custom mapping while ingesting the data:

{
  "format": "M/dd/yyyy"
}
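
If it helps, that format normally sits next to the field type in the full mapping. A minimal sketch, assuming the column is called date:

{
  "properties": {
    "date": {
      "type": "date",
      "format": "M/dd/yyyy"
    }
  }
}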

If this doesn't work, please share a sample of the data file you are ingesting.

Regards
Manu Agarwal

Hi, I'm still not having success ingesting the date from this doc (see below), even when using the Elastic pipeline options. Also, I'd like to know more about how the Elastic pipeline features work in conjunction with the "custom mapping". For example: does a pipeline feed the data into the custom mapping when I press "import"? If I don't use a custom map, how does the pipeline behave on "import"? Does it just transform the data into the auto-interpreted map? And what if I transform a field with a pipeline and the mapping no longer matches what the field is transformed into? I'm trying to better understand the Elastic pipeline features and processors because I think there are a lot of additional things I could do with them.

date, text1,text2,text3,text4,float1,float2,float3
06-22-2021,BOB,"BOB Inc",88160R101,3506655.00,217431432.65,9.58

Found the ingest problem with the date: there was a malformed character. I'd still like some more info on how to use pipelines better.

Hi Jeff,

Each processor in an Elasticsearch ingest pipeline runs sequentially, making specific changes to incoming documents. After the processors have run, Elasticsearch adds the transformed documents to your data stream or index. By default, pipeline processing stops when one of the processors fails or encounters an error.
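
As a sketch of what that could look like for your CSV, a pipeline with a single date processor might be defined along these lines (the pipeline name, field name and format below are only assumptions based on the sample row you shared):

PUT _ingest/pipeline/csv-date-pipeline
{
  "description": "Illustrative only: parse the date column (MM-dd-yyyy in the sample) into a date field",
  "processors": [
    {
      "date": {
        "field": "date",
        "formats": ["MM-dd-yyyy"],
        "target_field": "date"
      }
    }
  ]
}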

To ignore a processor failure and run the pipeline's remaining processors, set ignore_failure to true.
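
For example, the same illustrative date processor from the sketch above would let the rest of the pipeline continue even if a row has a malformed date value:

{
  "date": {
    "field": "date",
    "formats": ["MM-dd-yyyy"],
    "target_field": "date",
    "ignore_failure": true
  }
}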

The mapping for the fields received from the datasource is auto-detected. However, the mapping can be changed to other valid field types, and new fields can be added to accommodate fields created by a transformation pipeline that are not auto-detected.

If you don't use the custom mapping, the fields handled by your pipeline definition will follow that definition, and the remaining fields will be auto-mapped.
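
So if, for example, your pipeline created a new field (say a hypothetical parsed_date), you would add a matching entry to the custom mapping so it agrees with what the pipeline actually produces:

{
  "properties": {
    "parsed_date": {
      "type": "date"
    }
  }
}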

For more details about Elasticsearch pipeline processors, please check here

Regards
Manu Agarwal
