Hello, I'm building a Spark app that needs to read a large number of log files from S3. I'm doing this in the code by constructing the file list and passing it to the context as follows:

val myRDD = sc.textFile("s3n://mybucket/file1, s3n://mybucket/file2, ... , s3n://mybucket/fileN")

When running it locally there are no issues, but when running it on the yarn-cluster (running Spark 1.1.0, Hadoop 2.4
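
For reference, here is a minimal sketch of how I build the list, assuming a placeholder bucket name and sequentially numbered files (my real code derives the names from elsewhere):

import org.apache.spark.{SparkConf, SparkContext}

object ReadLogs {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("ReadLogs"))

    // sc.textFile accepts a comma-separated list of paths,
    // so join the individual S3 paths into one string.
    val files = (1 to 100).map(i => s"s3n://mybucket/file$i")
    val myRDD = sc.textFile(files.mkString(","))

    println(myRDD.count())
    sc.stop()
  }
}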