My GSoC journey @Open-Astronomy

Introduction

So for people who don’t know, Google Summer of Code is a global program by Google where students and developers get the chance to work on real open-source projects with mentors from around the world. It’s not just about coding; it’s about learning how to collaborate, contribute to big projects, and ship something useful. And yes, there’s also a stipend, which makes it even more exciting.

Well, the first time I heard about GSoC was in my first year of college, but I didn’t give it much attention. I thought it would be really hard to get in. Then, in my second year, two of my friends got accepted, and that really inspired me. It made me realize it was actually possible.

Starting Early

So I started digging deep, even before the orgs were announced. I really wanted to get in, since there aren’t many internships out there that offer both good experience and a stipend. I navigated various orgs' codebases and read a lot of issues and PRs to understand how open-source contributions are made, but I didn’t pick a specific org at that point.

Read more…

Time Series Analysis in Stingray.jl

From Stargazing Dreams to Code Stargazing:)

Ever since childhood, I have been passionate about the mysteries of the universe. Over time, that passion grew into something more technical — not just looking at the stars, but wanting to understand their signals.

This summer, through Google Summer of Code 2025 (GSoC), I had the opportunity to contribute to Stingray.jl, a Julia library for time series analysis in astronomy. What follows is both a story of my personal journey and an exploration of the science and code I worked on.

The Beginning – Switching Languages and First Steps

When the organization was announced, I immediately started exploring Stingray (the Python version). I wanted to understand the structure, so I made my first PRs there.

Once I understood Stingray a bit, I started working on Stingray.jl, the Julia port of Stingray initiated by Aman Pandey in 2022. That meant learning Julia from scratch! Thankfully, Julia is similar to Python in many ways, so I picked it up quickly.

During my early testing, I noticed outdated dependencies: for example, some packages were only compatible with Julia 1.7, while the project was moving towards Julia 1.11. My first clumsy PR (on power_colors) wasn’t great (more on that in a bit), but it taught me valuable lessons.

Later, I worked on updating fourier.jl. This was also my first time seriously using structs in Julia. My mentor @fjebaker guided me on abstract methods and struct design, while @matteobachetti helped me understand what kind of astronomical data the project needed.

At first, I was confused about telescopes and their instruments. To learn, I went to NASA HEASARC documentation, where I discovered FITS data (Flexible Image Transport System) — the standard format for astronomical data. That’s when things started making sense.
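To make "event data" concrete: a FITS event file is essentially a table of photon arrival times plus instrument metadata. Here is a minimal Python sketch of peeking into one with astropy (the filename is invented; real NICER files follow a similar pattern):

from astropy.io import fits

# Open a hypothetical NICER cleaned event file and read its EVENTS table.
with fits.open("ni1200120106_0mpu7_cl.evt") as hdul:
    header = hdul["EVENTS"].header   # mission, instrument, timing keywords
    events = hdul["EVENTS"].data
    times = events["TIME"]           # photon arrival times in seconds
    pi = events["PI"]                # pulse-invariant energy channels
    print(header["TELESCOP"], len(times), "events")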


My First Big Task – Reading Event Data

@fjebaker opened a milestone and assigned me the readevent task. At the same time, I became good friends with fellow contributor Jawd Ahmad. We often helped each other understand the codebase.

I implemented the readevent function with a small struct called EventList, something like:

Read more…

Google Summer of Code: Final Submission

Google Summer of Code @ OpenAstronomy

I have completed my Google Summer of Code project, “Fast Parsing of Large Databases and Execution Bottlenecks,” on the RADIS project under the OpenAstronomy umbrella. The project developed high-performance, line-by-line parsing for high-resolution infrared molecular spectra, and this is the final blog post documenting the results and lessons learned. Most important of all, I loved the work I was able to do; it was interesting, and it was made possible by my helpful and amazing mentors. I am incredibly grateful to Dr. Nicolas Minesi, Dr. Dirk van den Bekerom and Tran Huu Nhat Huy.

Project Description

The problem is that the HITEMP CO₂ spectroscopic database is extremely large and inefficient to work with: the distributed file is about 6 GB compressed but expands to roughly 50 GB, and the existing workflow requires fully decompressing and then batch-parsing the entire file, an operation that takes on the order of 2.5 hours and uses a lot of disk I/O, memory, and network bandwidth. That full-decompress/parse approach creates several practical bottlenecks: it forces users to have large, fast storage and long processing windows, makes quick exploratory analysis or iterative development impractical, and wastes bandwidth and time when only small portions of the data are needed. In short, the dataset’s size and the parser’s all-or-nothing design prevent efficient, selective access and slow down every downstream analysis that depends on these spectral lines.

Project Walkthrough

After discussing the project, we divided it into three parts. The first part was optimizing the existing code so that, at a minimum, we would have a better working infrastructure. The second part was enabling partial downloads so a user can retrieve only the necessary part of the file without downloading it entirely. The last part was building a C++ Single Instruction, Multiple Data (SIMD) parser using Intel intrinsics. Below I have described each of these in detail.

Read more…

Building a super-fast SIMD parser for the dataset - The final episode

Welcome to the last episode of my Google Summer of Code series. In the previous post I showed how I could seek inside a large .bz2 file and decompress a region to get about 500 megabytes of raw data. That worked, but it still required downloading the full 6 gigabyte compressed file up front. After talking with the maintainers we switched to a partial-download approach: a user requests a region and the system downloads only the 45–65 megabytes of compressed bytes that decompress to the exact 500 megabyte window we need. That change took some extra work, but it makes the system feel immediate for new users: you can get hundreds of thousands of parsed rows in a couple of minutes without pulling the whole archive.
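The download side of this is just an HTTP Range request; the hard part is the bookkeeping that maps a decompressed window back to compressed byte offsets. A minimal Python sketch of the download step (the function name and offsets are illustrative, not the actual RADIS code):

import requests

def fetch_compressed_window(url, start, end):
    # Download only the compressed bytes in [start, end] via an HTTP
    # Range request, instead of pulling the whole ~6 GB archive.
    resp = requests.get(url, headers={"Range": f"bytes={start}-{end}"},
                        timeout=120)
    resp.raise_for_status()  # a 206 Partial Content response is expected
    return resp.content      # compressed bytes covering our 500 MB window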

When building a parser that aims to beat Pandas’ vectorized operations, single-threaded concurrency isn’t enough. Concurrency is about handling multiple tasks by rapidly switching between them on a single core. It gives the illusion of things happening in parallel, but at any given instant only one task is actually running. It’s like multitasking in everyday life: you switch back and forth, but you’re never truly doing two things at the same time.

True parallelism, on the other hand, is about dividing independent work across multiple cores so that tasks literally run simultaneously. Each task makes progress without waiting for others to finish, which is what makes SIMD vectorization or multiprocessing so powerful for workloads like parsing large datasets.
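To illustrate the contrast with a toy example (Python multiprocessing rather than the C++ SIMD path used in the project, and with fake data): each worker below parses its own slice of lines on its own core, with no task switching involved.

from multiprocessing import Pool

def parse_chunk(chunk):
    # Stand-in for the real parsing work; every worker runs independently.
    return [line.split() for line in chunk]

if __name__ == "__main__":
    lines = ["12 2253.5 1.2e-20"] * 1_000_000
    chunks = [lines[i::8] for i in range(8)]    # 8 independent slices
    with Pool() as pool:                        # one process per core
        parsed = pool.map(parse_chunk, chunks)  # runs truly in parallel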

Read more…

RADIS Web App Final GSoC Blog

In this final update, I’ll be covering the latest improvements.

Option to run the backend locally with Docker:

  • Simply follow the steps in the app to get started.
  • Improves performance if your local machine is powerful.
  • You can even mount your existing databases (if you’ve used the RADIS package) by running the container with the following command:
docker run -d -p 8080:8080 \
-v /home/mohy/.radisdb:/root/.radisdb \
-v /home/mohy/radis.json:/root/radis.json \
radis-app-backend

Moved to SpectrumFactory instead of the simpler calc_spectrum (see the sketch after this list):

  • This enables GPU-based calculations if implemented in the future.
  • Also improves ExoMol database performance by avoiding unnecessary broad file downloads.
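For readers unfamiliar with RADIS, the difference looks roughly like this (parameters are illustrative; the app wires up its own factory):

from radis import SpectrumFactory

sf = SpectrumFactory(wavenum_min=1900, wavenum_max=2300,  # cm-1
                     molecule="CO", isotope="1",
                     pressure=1.01325)          # bar
sf.fetch_databank("hitran")     # fetch/load the line database once
s = sf.eq_spectrum(Tgas=700)    # the factory is reusable for more spectra
s.plot("radiance_noslit")

The key point is that the factory holds the loaded line data, so repeated calculations don’t redo the expensive setup that calc_spectrum performs on every call.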

UI improvements:

  • Better layout and space utilization to make the graph display larger and clearer.

There is a new option to choose between using all isotopes or only the first one.

Testing

I’ve added tests for all the new features on both the frontend and backend, and ensured coverage of the core existing functionalities.

Overall, it has been a fantastic journey working on this project with such a supportive community and mentors.

Read more…

We’re onto something.

I know I’m a bit late with this update, but I’ve been deep in the process of making everything work. The pieces we’ve been putting together since the beginning of the project are finally falling into place.

Since my last blog post, I resolved the broken data flow during the parsing of the states file. Now, all the necessary data flows cleanly through to the spectrum calculation step.

I also began getting some calculated values, but something was off. Instead of showing normalized populations per electronic state, the code was printing partition function values. Once corrected, the values made sense and were properly normalized.
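For context, the two quantities are closely related, which made the mix-up easy to make: the partition function is just the normalization constant for the Boltzmann populations. A toy Python sketch of the relationship (not the project’s actual code):

import numpy as np

def normalized_populations(g, E_cm1, T):
    # n_i = g_i * exp(-c2 * E_i / T) / Q(T), with energies in cm-1 and
    # c2 = hc/kB ≈ 1.4388 cm·K (the second radiation constant)
    c2 = 1.4388
    boltz = np.asarray(g) * np.exp(-c2 * np.asarray(E_cm1) / T)
    Q = boltz.sum()     # the partition function (what was being printed)
    return boltz / Q    # normalized populations, summing to 1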

Read more…

Benchmarking Partial Decompression

Hello everyone, and welcome back to another episode of my Google Summer of Code project series! In my previous post, I introduced a clever partial decompression mechanism. If you haven’t had a chance to read it yet, check it out here. I promise it’s worth your time. In this post, I’ll share the benchmarks and findings for the new functionality.

Both the previous and new implementations require downloading the full 6 GB file before processing, so download time is excluded from all benchmarks below. In the original approach, after downloading, the entire file is parsed into a DataFrame and stored in HDF5/H5 format. As expected, this process is painfully slow when you only need to extract, say, 2.5 GB out of the 50 GB of decompressed data. This is where my partial decompression logic shines.
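As a rough sketch, the old flow looked something like this (the column specs are illustrative, not the real HITEMP layout):

import bz2
import pandas as pd

# Old all-or-nothing flow: decompress the full stream, parse everything,
# then cache the resulting DataFrame to HDF5.
with bz2.open("hitemp_co2.par.bz2", "rt") as f:
    df = pd.read_fwf(f, colspecs=[(0, 2), (3, 15), (15, 25)],
                     names=["iso", "wavenumber", "intensity"])
df.to_hdf("hitemp_co2.h5", key="lines")

Every query pays for the entire 50 GB pass, even when it needs only a slice.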

Parsing 2.5 GB Without Cache

Below is the comparison of parsing 2.5 GB of data (without cache) using the old mechanism versus my new solution:

Read more…

RECIPES RECIPES WHAT KIND OF RECIPES? :)


Exploring Light Curve Plotting in Stingray.jl: Recipes and Examples

Hello everyone! Continuing my work with Stingray.jl, let’s learn about plotting, my favorite topic :)

Light curves are essential in high-energy astrophysics, as they represent the brightness of an astronomical object as a function of time. Precise visualization and filtering of these curves help astronomers perform accurate timing analysis, detect variability, and identify astrophysical phenomena.

This post demonstrates how to generate and customize light curve plots using Stingray.jl, leveraging real NICER datasets.

Read more…

Fitting Feature Now Available in the RADIS App

The main goal of fitting is to find the best values for unknown parameters (like temperature Tgas, mole fraction, etc.) that make your theoretical (simulated) spectrum match the experimental data as closely as possible.
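Under the hood, this is a standard least-squares problem. A minimal Python sketch of the idea (simulate here is a stand-in for the app’s RADIS calculation, not the actual implementation):

import numpy as np
from scipy.optimize import minimize_scalar

def fit_tgas(s_exp, simulate):
    # Find the Tgas whose simulated spectrum best matches the experiment;
    # simulate(tgas) must return a spectrum on the same grid as s_exp.
    cost = lambda tgas: np.sum((simulate(tgas) - s_exp) ** 2)
    res = minimize_scalar(cost, bounds=(300.0, 5000.0), method="bounded")
    return res.x

Fitting several parameters at once (Tgas plus mole fraction, say) works the same way with a vector-valued cost function.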

This feature was not previously available in the app, but it is now (once the PR gets merged).

To use it: Activate Fit Spectrum Mode from the header to open the fitting form.

Read more…