Rapidly growing data volumes and data complexity mean that researchers need new solutions for large-scale cluster computing. This workshop provides an introduction to, and hands-on experience with, our solution: "a new and versatile platform for high-throughput data processing". The workshop's objectives are to explain the platform's different functionalities, describe how to apply for access, and give participants hands-on experience with basic job processing, software distribution and portability, using internal and external storage systems, and collaborating on data analysis within a project. Participants will gain first-hand experience with the flexibility, interactivity, and interoperability the platform offers.
Who: Anyone who wants to start processing large data volumes (tens to hundreds of terabytes or even more)
When: Aug 28, 2019.
Requirements: Participants must bring a laptop running macOS, Linux, or Windows (not a tablet, Chromebook, etc.). Basic knowledge of the UNIX command line, bash scripting, and cluster computing is expected.
A more detailed description is available in the syllabus below or on the workshop webpage.