Importing Historical Data
Importing is a common workflow that moves data from your machine to the cloud. It differs from the Server in that it supports historical data as well: the Server only streams the latest data.
Here are common use cases for importing:
- You have historical data on your machine that you would like to visualize in your cloud notebooks.
- You have various Google Sheets or logs that you've kept manually and are ready to migrate to the cloud.
- You have offline sensors that require manual data transfer, and you need to upload that data to the cloud.
Using a Start Date
You can easily import your data using the following command. Keep in mind that you must first set up a Source, Target, and Link before running this command.
You must specify a `--start-date` with this command. Please see the examples below.
```shell
sparky import --start-date 2019                  # starting 2019-01-01 00:00:00 UTC
sparky import --start-date 2019-10               # starting 2019-10-01 00:00:00 UTC
sparky import --start-date "2019-10-01 10:00:00" # starting 2019-10-01 10:00:00 UTC
```
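The defaulting behavior shown in the comments above (a missing month, day, or time falls back to January 1st at midnight UTC) can be sketched in a few lines of Python. `normalize_start_date` is an illustrative helper, not part of the sparky CLI:

```python
from datetime import datetime, timezone

def normalize_start_date(value: str) -> datetime:
    """Expand a partial start date: missing month/day default to 01,
    a missing time defaults to midnight, and the result is treated as UTC.
    Illustrative only -- not how sparky necessarily parses it internally."""
    for fmt in ("%Y-%m-%d %H:%M:%S", "%Y-%m-%d", "%Y-%m", "%Y"):
        try:
            return datetime.strptime(value, fmt).replace(tzinfo=timezone.utc)
        except ValueError:
            continue
    raise ValueError(f"unrecognized start date: {value!r}")

print(normalize_start_date("2019"))     # 2019-01-01 00:00:00+00:00
print(normalize_start_date("2019-10"))  # 2019-10-01 00:00:00+00:00
```

Trying the most specific format first ensures that a full timestamp is never mistaken for a bare year.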
In the event that you need to manually transfer data from a sensor to your desktop, simply move the files into their respective directories (wherever you store your sensor data) and then run this command. Once it finishes processing, all of your data will be in the cloud!
You probably have different sensors on your farm and need to relay several metrics to the cloud. To do this, we recommend keeping a separate directory for each sensor. This allows a single Source configuration and keeps everything nice and organized.
It may look something like this:
```
/logs
  /sensor1
    2021-01-01.csv
    2021-01-02.csv
  /sensor2
    2021-01-01.csv
    2021-01-04.csv
  ...
```
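A layout like the one above is easy to inspect before importing. Here is a short Python sketch that groups the daily CSV files under each per-sensor directory; the function name is illustrative and not part of sparky:

```python
from pathlib import Path

def csv_files_by_sensor(root: str) -> dict[str, list[str]]:
    """Map each per-sensor directory name (e.g. 'sensor1') to the sorted
    list of CSV filenames it contains. Illustrative helper only."""
    result = {}
    for sensor_dir in sorted(Path(root).iterdir()):
        if sensor_dir.is_dir():
            result[sensor_dir.name] = sorted(p.name for p in sensor_dir.glob("*.csv"))
    return result
```

Running this over the `/logs` directory before an import is a quick sanity check that every sensor's files landed where you expect.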