AWS Setup Guide

Dataverse supports AWS S3 and EMR. If you want to use Dataverse with them, follow these steps!

Prerequisites

SPARK_HOME is required for the following steps. Please make sure you have set SPARK_HOME before proceeding. You can find a guide to setting up SPARK_HOME on this page.

1. Check hadoop-aws & aws-java-sdk version

hadoop-aws

The version must match your Hadoop version, which you can check by running the command below. At the time of writing this README.md the Hadoop version was 3.3.4, so the examples use version 3.3.4.

>>> from dataverse.utils.setting import SystemSetting
>>> SystemSetting().get('HADOOP_VERSION')
3.3.4

aws-java-sdk

The version must be compatible with the hadoop-aws version. Check the Compile Dependencies section of Apache Hadoop Amazon Web Services Support » 3.3.4 on Maven. (e.g. hadoop-aws 3.3.4 is compatible with aws-java-sdk-bundle 1.12.592)

2. Download hadoop-aws & aws-java-sdk version

Download the corresponding versions of the hadoop-aws and aws-java-sdk jar files into the $SPARK_HOME/jars directory.

Option A. Manual Setup
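For the manual route, the idea is simply to fetch the two jars from Maven Central and drop them into $SPARK_HOME/jars. A minimal sketch in Python (the URLs follow Maven Central's standard layout; the versions match the 3.3.4 / 1.12.592 example above, so substitute your own, and the `download_jars` helper is ours, not part of Dataverse):

```python
import os
import urllib.request

# Versions from the example above; replace with the ones matching your Hadoop.
HADOOP_AWS_VERSION = "3.3.4"
AWS_SDK_VERSION = "1.12.592"

MAVEN = "https://repo1.maven.org/maven2"
JARS = {
    f"hadoop-aws-{HADOOP_AWS_VERSION}.jar":
        f"{MAVEN}/org/apache/hadoop/hadoop-aws/{HADOOP_AWS_VERSION}/"
        f"hadoop-aws-{HADOOP_AWS_VERSION}.jar",
    f"aws-java-sdk-bundle-{AWS_SDK_VERSION}.jar":
        f"{MAVEN}/com/amazonaws/aws-java-sdk-bundle/{AWS_SDK_VERSION}/"
        f"aws-java-sdk-bundle-{AWS_SDK_VERSION}.jar",
}

def download_jars(spark_home=None):
    """Download both jars into $SPARK_HOME/jars, skipping files already there."""
    spark_home = spark_home or os.environ["SPARK_HOME"]
    jars_dir = os.path.join(spark_home, "jars")
    paths = []
    for name, url in JARS.items():
        dest = os.path.join(jars_dir, name)
        if not os.path.exists(dest):
            urllib.request.urlretrieve(url, dest)
        paths.append(dest)
    return paths
```

Calling `download_jars()` once after installing Spark is enough; Spark picks up every jar in $SPARK_HOME/jars automatically at startup.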

Option B. Use Makefile of Dataverse

The Makefile can be found in the Dataverse repository [Link].

3. Set AWS Credentials

Currently we do not support environment variables for AWS credentials, but this will be supported in the future. Please use the aws configure command to set your AWS credentials; this writes the ~/.aws/credentials file, which is accessible by boto3.

  • aws_access_key_id

  • aws_secret_access_key

  • region
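aws configure stores these values in INI-style files. If you want to verify programmatically what was stored, here is a small sketch using only the standard library (the `read_aws_profile` helper is ours, not part of Dataverse or boto3):

```python
import configparser
import os

def read_aws_profile(path="~/.aws/credentials", profile="default"):
    """Return the key/value pairs `aws configure` stored for a profile."""
    parser = configparser.ConfigParser()
    parser.read(os.path.expanduser(path))
    if profile not in parser:
        raise KeyError(f"profile {profile!r} not found - run `aws configure`")
    return dict(parser[profile])
```

Note that the access key id and secret access key live in ~/.aws/credentials, while aws configure writes the region to ~/.aws/config.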

If you have a session token:

When you have temporary security credentials, you must set the session token too.
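With temporary credentials, ~/.aws/credentials simply gains one extra line. A sketch of the resulting file (all values are placeholders; you can add the extra line with `aws configure set aws_session_token <value>`):

```ini
[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>
aws_session_token = <your-session-token>
```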

🌠 Dataverse is now ready to use AWS S3!

Now you are ready to use Dataverse with AWS! All other details will be handled by Dataverse!
