Open Access

J. Space Weather Space Clim., Volume 12, 2022, Article Number 20
DOI: https://doi.org/10.1051/swsc/2022020
Published online: 21 June 2022

© O. Stepanyuk et al., Published by EDP Sciences 2022

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1 Introduction

Solar eruptive events are complex phenomena, which most often include solar flares, filament eruptions, coronal mass ejections (CMEs), and CME-driven shock waves. CME-driven shocks in the corona and interplanetary (IP) space are considered the main producers of solar energetic particles (SEPs), primarily via the diffusive shock acceleration (DSA) and shock drift acceleration (SDA) processes (Reames, 2021). Considerable effort has been expended in characterizing and modeling the various aspects of SEP acceleration under idealized conditions (Vainio & Laitinen, 2008; Sokolov et al., 2009; Kozarev et al., 2013). Recent efforts have focused on building more realistic models for CME-driven shocks and their acceleration of SEPs, directly driven by real data (Vourlidas et al., 2012; Kwon et al., 2014; Kozarev et al., 2015, 2019).

Knowledge of how CME-driven shocks interact with the three-dimensional coronal magnetic fields and plasma is crucial for understanding how efficiently they accelerate particles, as well as how much SEP fluxes may spread in heliospheric longitude and latitude (Rouillard et al., 2016). A specific requirement for modeling SEP acceleration by DSA is deducing the shape of the coronal shock fronts as accurately as possible from the observations. This is because the local orientation of the magnetic field to the shock front strongly modulates the particle acceleration process (Guo & Giacalone, 2013). Furthermore, it has been shown that, in the early stages of eruptions, the CME often over-expands in the lateral direction, changing the overall shape of the compressive front ahead of it (Bein et al., 2011; Temmer, 2016). Thus, it is imperative to move beyond the idealized descriptions of the shock surfaces regularly used to model their evolution (Vourlidas et al., 2012; Kwon et al., 2014; Rouillard et al., 2016), and to employ more advanced techniques.

Characterizing coronal shocks is possible using remote measurements from instruments on modern heliospheric space missions. Extreme ultraviolet (EUV) imaging with telescopes such as the EUV Imagers (EUVI; Wuelser et al., 2004) on the STEREO mission and the Atmospheric Imaging Assembly (AIA; Lemen et al., 2012) instrument on the Solar Dynamics Observatory (SDO) has proven particularly useful for detecting and characterizing large-scale shocks, also known as EUV waves or Coronal Bright Fronts (CBF; Long et al., 2011), because of its unprecedented temporal and spatial resolution and its multi-wavelength coverage. Such time-dependent characterization can further be used to inform models of energetic particle acceleration in the solar corona and inner heliosphere (Kozarev & Schwadron, 2016; Kozarev et al., 2017, 2019). The development of automated solar feature-detection and identification methods has accelerated in recent years, driven by the demands of Big Data. An overview of the fundamental image processing techniques used in these algorithms is presented in Aschwanden (2010). These techniques are used to detect many features in various types of observations at different heights in the solar atmosphere (Pérez-Suárez et al., 2011). Some previous work has focused on detecting sunspot groups and active regions in photospheric continuum images, magnetograms, and EUV images (Curto et al., 2008).

The issue with most algorithms currently used in solar feature detection and tracking is the complexity of their processing chains, which makes them difficult to implement; they are built for very specific tasks and are often applicable only to data from a single instrument. The task of EUV wave recognition and tracking in particular is complicated by their much weaker intensity compared with most other solar features, next to which they propagate and project. A number of algorithms for EUV wave detection of varying complexity exist (Podladchikova & Berghmans, 2005; Verbeeck et al., 2014; Long et al., 2014; Ireland et al., 2019). Of these, perhaps the most advanced are CorPITA (Long et al., 2014) and AWARE (Ireland et al., 2019). The CorPITA algorithm employs percentage base difference images to fit multi-Gaussian shapes to flare-source-centered, sector-averaged portions of the wave along the solar disk. The AWARE algorithm uses more advanced pre-processing with persistence running difference images to characterize the wave shapes along similar flare-centered sectors, and a random sample consensus (RANSAC) algorithm to select the features. However, using running difference images introduces spurious features and is not recommended for recovering the true (projected) wave shape (Long et al., 2011). In addition, the persistence imaging approach tends to amplify the noise in the output, requiring additional post-processing (median filtering and closing operations). While both of these algorithms are well automated based on the flare onset signal, they have so far been applied only on the solar disk. What is more, their procedures are focused specifically on EUV waves.

In contrast, here we present Wavetrack – a generalized yet flexible framework for solar feature detection, implemented as an object-oriented, easy to integrate, and freely available Python library (https://gitlab.com/iahelio/mosaiics/wavetrack). At its core is a general method for the recognition and tracking of features in solar images, such as CME shock waves and filaments. The different applications can be controlled easily by modifying a few threshold parameters. The calculation scheme is based on a multi-scale data representation concept (Starck & Murtagh, 2002), combined with an à trous wavelet transform (Holschneider et al., 1989; Akansu, 1991) and a set of image filtering techniques.

In recent years, machine learning methods have become more frequently applied in solar physics. For instance, Szenicer et al. (2019) used a combined CNN to produce EUV irradiance maps from SDO/AIA images, Li and Zhu (2013) used a multi-layer model to predict solar flares based on sequential sunspot data, and Kim et al. (2019) applied generative adversarial networks (GANs) to generate the magnetic flux distribution of the Sun from SDO/AIA images. Nevertheless, the use of data-driven approaches for tracking CME-related phenomena is currently limited by the scarcity of training sets, and providing a tool that makes the preparation of such training sets easier is one of the motivations behind the current work. Wavetrack results can be easily converted into annotated training sets for teaching learning models to recognize various solar features in remote observations.

The paper is organized as follows: We describe the method of Wavetrack in Section 2, followed by an application to several events and their analysis, presented in Section 3. We provide conclusions and a summary in Section 4.

2 Methodology

In this section, we discuss the overall methodology of the Wavetrack framework and the various steps taken in processing the input data. We defer the most technical parts of the discussion to Appendices A–C of the paper.

2.1 À trous wavelet decomposition

In our study, we applied an à trous wavelet decomposition method, in which the data is passed through high- and low-pass filters without decimation. This algorithm is widely known as the “algorithme à trous” (“with holes”), referring to the insertion of zeros in the filters. It was introduced by Holschneider et al. (1989), and we used an approach similar to the one described in Starck and Murtagh (2002), Chui (1992), and Stenborg and Cobelli (2003). In previous observational studies, it has shown great capability for enhancing solar EUV and white-light images, improving the clarity and intensity of faint features such as EUV waves (Stenborg & Cobelli, 2003; Stenborg et al., 2008).

Briefly, the algorithm works as follows. At each step of the algorithm, the image is convolved with the jth iteration of the à trous wavelet kernel, which in its initial form k0 is a 5 × 5 matrix derived from the B3 spline (see Appendix A, Eqs. A12 and A13):

$$ k_0 = \frac{1}{256} \begin{pmatrix} 1 & 4 & 6 & 4 & 1 \\ 4 & 16 & 24 & 16 & 4 \\ 6 & 24 & 36 & 24 & 6 \\ 4 & 16 & 24 & 16 & 4 \\ 1 & 4 & 6 & 4 & 1 \end{pmatrix} $$

The result is a wavelet coefficient matrix cj. The wavelet scale for decomposition level j is defined as ωj+1 = cj+1 − cj. The full à trous wavelet calculation scheme for a pre-selected set of j = 1, …, J decomposition levels consists of the following steps (a minimal Python sketch is given after the list):

  1. Set the wavelet coefficient c0 equal to the original image;

  2. Calculate each consecutive wavelet coefficient by convolving the previous wavelet coefficient with the corresponding iteration of the à trous wavelet kernel: cj+1 = cj ∗ kj, where kj is calculated by dilating k0 at a rate of 2^j;

  3. Calculate the scale of the discrete à trous wavelet transform as the difference between consecutive wavelet coefficients;

  4. Normalize scales and coefficients for further analysis.
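To make the scheme concrete, here is a minimal NumPy/SciPy sketch of steps 1–3 (the normalization of step 4 is omitted; the function and variable names are ours, not the Wavetrack API):

```python
import numpy as np
from scipy.ndimage import convolve

# 2-D B3-spline mother kernel k0: outer product of [1, 4, 6, 4, 1] / 16.
K0 = np.outer([1, 4, 6, 4, 1], [1, 4, 6, 4, 1]) / 256.0

def atrous_decompose(image, n_levels):
    """Return wavelet coefficients [c0..cJ] and scales [w1..wJ]."""
    coeffs = [np.asarray(image, dtype=float)]
    scales = []
    for j in range(n_levels):
        # Dilate k0 at rate 2**j: place its taps on a zero-initialized grid,
        # leaving 2**j - 1 zeros ("holes") between neighboring taps.
        step = 2 ** j
        kj = np.zeros((4 * step + 1, 4 * step + 1))
        kj[::step, ::step] = K0
        c_next = convolve(coeffs[-1], kj, mode="mirror")  # c_{j+1} = c_j * k_j
        scales.append(c_next - coeffs[-1])                # w_{j+1} = c_{j+1} - c_j
        coeffs.append(c_next)
    return coeffs, scales
```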

We have found the à trous wavelet decomposition to be the most efficient method for splitting solar images into a series, with each of the spatial-detail levels highlighted separately. Still, as Wavetrack was developed as a general framework for image recognition and tracking in solar physics, any other similar wavelet technique can be utilized, and the basic k0 kernel can be replaced. Every jth iteration of the kernel is initialized with zeros; positions are then filled directly according to the values of the mother-wavelet k0 kernel matrix, since repeated multiplications with dilation matrices are less computationally efficient.

2.2 Structures. Object criteria

By structure we mean a connected set of pixels on a single wavelet scale (see “Pathway A” in Fig. 2). The Structure S defines an Object on a scale j if:

(1)

and

(2)

where w_j^max(S) denotes the maximum value of the wavelet coefficient for the structure S. If the coefficient does not belong to any structure on the j − 1 scale, then

(3)

Otherwise it holds the maximum value for the structure:

(4)

where all wavelet coefficients on a scale j belong to the structure S on that scale:

(5)

2.3 The Wavetrack software. Concepts and processing routine

The Wavetrack framework performs wavelet decomposition of observational data and automated feature recognition. Intensity thresholding, image posterization, median filtering, image segmentation, and intensity-level splitting methods are available; these are defined below. They can be applied to each of the wavelet decomposition levels separately, with a different set of parameters, and the image can be recomposed from weighted wavelet scales after the application of filtering techniques. The framework follows a modular approach and is built as a set of classes. Within a typical event processing routine, every class represents one or a few image processing techniques, so the classes act as building blocks, allowing various decomposition and processing configurations depending on the input image characteristics and the type of objects of interest.

2.3.1 Input data

In this paper, we use the term coronal bright front (Kozarev et al., 2015) to refer to the phenomenon variously described as a shock wave, shock front, or bright front, defined as the anti-sunward edge of the shock sheath. According to numerous previous studies (Long et al., 2011; Kozarev et al., 2015), the best SDO/AIA channels for observing CBFs are those centered on the 193Å and 211Å wavelengths. In this work, we use 193Å data, initially obtained via the standard AIA pipeline using the SunPy package (The SunPy Community et al., 2020).

Base images are created by averaging a series of images over a few minutes prior to the start of the eruption. At each timestep, we start with a base difference image, constructed by subtracting the base image from the current raw image of the sequence. This enhances the change in intensity caused by the CBF, omits static details, and reduces noise. Running difference or persistence running difference images (Ireland et al., 2019) are used when necessary to determine the moment when the eruption starts.
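For illustration, constructing a base image and a base difference frame with SunPy might look like the following (a hedged sketch: the file pattern and the number of averaged pre-event frames are hypothetical, and the actual Wavetrack pipeline may differ):

```python
import numpy as np
import sunpy.map

# Load a sequence of AIA 193 A frames (the glob pattern is hypothetical).
seq = sunpy.map.Map("aia_193_event_*.fits", sequence=True)

# Base image: average of a few frames taken minutes before the eruption.
base = np.mean([m.data for m in seq[:5]], axis=0)

# Base difference for one timestep: current raw frame minus the base image.
base_diff = sunpy.map.Map(seq[10].data - base, seq[10].meta)
```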

Unlike bright active regions, the CBFs observed in EUV are usually very dim features, occupying a very narrow part of the extremely wide dynamic range of the raw image data. This generally makes them harder to recognize and track with standard techniques, particularly in relatively noisy AIA images. To overcome this obstacle, we pre-select the part of the dynamic range in which the shock wave is revealed in the base difference image of the sequence.

Wavetrack can also be used to track filaments, and we present some examples further in the article. As filaments are formed of relatively cool and dense gas suspended above the surface of the Sun and held by magnetic fields, from an image processing point of view they are generally less blurry objects with more distinct contours than CBFs, and they occupy lower parts of the image dynamic range. Within the typical Wavetrack setup used to track filaments, we therefore use inverted AIA 193Å images and less posterization (defined in Sect. 2.3.2) on coefficients with higher bit depths (>8 bits, see below). Depending on the source data and the velocity of the object of interest, base difference, running difference, or raw intensity data may be used for filaments. Wavetrack can also be used to create AWARE-like (Ireland et al., 2019) persistence running difference images.

Obtaining the on-disk parts of CBFs and filaments is a more difficult task than for the off-limb parts. Regardless of the class of the event and the type of difference images used as input, the on-disk parts of the object are usually surrounded by many small-scale details and noisy backgrounds with high pixel intensities. This leads to significantly different statistical distributions of pixel intensities in base difference, running difference, and original images, requiring different processing and decomposition setups. For that reason, these two parts of the image are often processed separately, and the results are merged.

2.3.2 Multiscale image filtering

Conceptually, iterative application of the à trous wavelet transform decomposes an image into several separate levels of detail. Processing the decomposition scales separately is therefore practical and advantageous when the shape and size of the objects can be estimated in advance. For high-bit-depth, low-contrast images, segmentation of objects becomes a non-trivial task because it is unclear how to define unambiguous object criteria for a group of pixels. The image bit depth determines the number of intensity levels into which an image can theoretically be separated. Posterization is the conversion of a continuous gradation of tone to several regions of fewer tones; it happens naturally when the image bit depth is decreased.

It can be assumed that the statistical dispersion of the pixel intensities within a structure (such as a shock wave or a filament) is lower than that of the noise or other surrounding signals, particularly on the disk. In other words, the structure is more “homogeneous” than its surroundings. If a wavelet coefficient reveals the object (and, to a certain extent, already skips the noise and irrelevant small-scale details), reducing the bit depth may further increase visual contrast and reveal shapes and contours. In this way, a gradient-contour operator applied to a reconstructed image (see Appendix B) does not produce spurious contours. We found empirically that reducing the wavelet coefficient resolution to 5 bits works best for dim and blurry objects; consequently, this bit depth reduction was used for all events in the current study, where we aimed to locate a CBF on or near the limb.
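As an illustration of this bit-depth reduction, a minimal sketch (the helper name and the normalization are ours, not the package's):

```python
import numpy as np

def reduce_bit_depth(coeff, bits=5):
    """Posterize an array to 2**bits intensity levels (32 levels for bits=5)."""
    n_levels = 2 ** bits
    lo, hi = float(coeff.min()), float(coeff.max())
    if hi == lo:                       # flat input: nothing to quantize
        return coeff.copy()
    norm = (coeff - lo) / (hi - lo)    # map to [0, 1]
    quantized = np.floor(norm * (n_levels - 1)) / (n_levels - 1)
    return quantized * (hi - lo) + lo  # map back to the original range
```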

The scheme in Figure 2 gives a general overview of the image processing stages with the Wavetrack software. For the AIA instrument data, the processing sequence for a single timestep goes as follows:

  1. The SunPy package is used to process the input data to AIA level 1.5;

  2. A Base Difference/Running Difference image is obtained;

  3. A certain part of the dynamic range is selected by cutting off the data above and below the high and low threshold values given in the input parameters;

  4. The image is decomposed with the à trous wavelet transform;

  5. Relative thresholding (sigma units) is applied to each of the wavelet coefficients separately. The pixel intensities are assumed to follow a Gaussian distribution;

  6. Bit depth of the wavelet coefficients is reduced;

  7. Object masks are obtained via Pathway A or Pathway B (see Fig. 2).

    1. In the case of large-scale off-limb objects (coronal shock waves), a simplified processing scheme is usually enough. The object mask is obtained by direct reconstruction of the image from wavelet scales with certain weight coefficients, and one of the higher level scales is applied as a “reference mask”.

    2. In more complex image processing cases, such as, for instance, an early-stage on-disk filament, complete utilization of a multi-scale image decomposition approach would be necessary. Here, wavelet scale values serve as object criteria (described in Sect. 2.1).

  8. The gradient field of the reconstructed image is calculated;

  9. Stand-alone masks of every object are obtained via the segmentation routine (see Appendix C);

  10. The center of mass and the geometrical center of the object are calculated (see Sect. 3.4);

  11. The object velocity is estimated based on at least two consecutive timesteps (see Sect. 3.4).

Points 1–8 describe the general image processing routine of the Wavetrack framework as tested on AIA 193-channel data. Nevertheless, in principle, any other FITS or PNG data can be used as input. The available image processing techniques can be applied in different sequences at any of the stages (1–8). Joined object masks produced with the multi-scale method can be split into intensity levels so that the relevant levels can be selected manually according to the setup; median filtering (not used for the events in this paper) and posterization can be applied directly at stages 1–2; a one-pass image segmentation technique can be used instead of clustering, etc.
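A condensed sketch of stages 3–7a (Pathway A), reusing atrous_decompose and reduce_bit_depth from the sketches above; the thresholds, weights, and names are illustrative rather than the Wavetrack API, and for brevity the relative thresholding is applied to the scales rather than to the coefficients:

```python
import numpy as np

def sigma_threshold(arr, nsigma=2.0):
    # Stage 5: keep pixels more than nsigma standard deviations above the
    # mean, assuming roughly Gaussian pixel statistics.
    return np.where(arr > arr.mean() + nsigma * arr.std(), arr, 0.0)

def cbf_mask(base_diff, weights, lo=-50, hi=150, nsigma=2.0, bits=5):
    """Clip, decompose, filter, and recompose into a binary object mask."""
    clipped = np.clip(base_diff, lo, hi)                       # stage 3
    _, scales = atrous_decompose(clipped, len(weights))        # stage 4
    filtered = [sigma_threshold(reduce_bit_depth(sc, bits), nsigma)
                for sc in scales]                              # stages 5-6
    recon = sum(wt * sc for wt, sc in zip(weights, filtered))  # stage 7a
    return recon > 0
```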

During the late stages of the shock wave propagation in some events, the pixel values of certain regions of the shock wave (usually in the nose) can fall below the sensitivity of the overall computational scheme (see Fig. 7, bottom left panel). In that case, several non-connected objects would still be detected by our algorithm and would appear on the resulting mask within the final recomposition routine (see Fig. 2, Pathway A, and the algorithm description in Sects. 2.2 and 2.3.2). Depending on the input data, these parts may be revealed after additional adjustments of the decomposition parameters for the detection of dimmer objects.

At the final stage, we estimate the object velocity by calculating the speed of the center of mass of a CBF mask applied to raw data for at least two consecutive time steps. For each object mask, a “center of mass” is calculated from the pixel coordinates, with pixel intensities taken as weights. In binary object masks, all pixel weights are equal, and calculating the coordinates of the “center of mass” then yields the geometrical center of the object.
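A minimal sketch of this centroid and speed estimate (names and the AIA plate-scale conversion are our assumptions):

```python
import numpy as np

def mass_center(mask, intensity=None):
    """Centroid of mask pixels: intensity weights give the center of mass;
    a binary mask (equal weights) gives the geometrical center."""
    ys, xs = np.nonzero(mask)
    w = intensity[ys, xs] if intensity is not None else None
    return np.array([np.average(xs, weights=w), np.average(ys, weights=w)])

def centroid_speed(c1, c2, dt_seconds, km_per_pixel=435.0):
    # Plane-of-sky speed of the centroid between two consecutive timesteps;
    # ~435 km/pixel assumes AIA's 0.6 arcsec pixels at ~725 km per arcsec.
    return np.linalg.norm(c2 - c1) * km_per_pixel / dt_seconds
```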

2.3.3 Single timestep processing stages and parameters

Here we use three timesteps of the CBF event of June 07, 2011, observed with AIA, to illustrate the various processing stages of the Wavetrack framework.

Figure 3A shows base difference images obtained by subtracting a base image from each timestep's image data and applying upper and lower thresholds to the pixel values. A base image is created for each event as an average of a few consecutive timesteps 2–5 min prior to the beginning of the event. The threshold interval (−50, 150) is selected to narrow the image dynamic range and focus on the image segment where the object of interest (the CBF in this case) is located. In the next step (Fig. 3B), base difference images are decomposed with the à trous wavelet technique into a series of scales (see Fig. 1), and relative thresholding is applied once more to each of the wavelet coefficients, depending on the statistical distribution of the pixel intensities at each decomposition level. Finally, after segmentation, the object masks at each time step are multiplied by the original data (Fig. 3C) to reveal the intensity of different parts of the object (the CBF in this case). Figure 3D shows the masks overlaid on the base difference images for context.

Fig. 1

Wavelet coefficients and scales obtained by decomposing a normalized initial image (in this case, an AIA 193-channel base difference) after thresholding and shifting so that the values fit into the [0, 1] interval. Left panels: Wavelet coefficients at decomposition levels 0, 2, 4, 6. Right panels: Corresponding wavelet scales at the same decomposition levels.

Fig. 2

Image processing stages with the Wavetrack software. A base difference, running difference, or raw image is used as input. After pre-processing, the image is decomposed with the à trous wavelet transform, and objects are labeled depending on the wavelet scale values.

Fig. 3

Demonstration of the Wavetrack application stages at four times during the event of June 07, 2011. From left to right, the columns show: A) base difference images, B) recomposed object masks, C) object masks applied to the original data, and D) the applied masks overlaid on base difference images.

Above, we described the methods and algorithms implemented in our software, followed by a single-timestep example. The exact processing and decomposition setup and the parameters of the filtering techniques depend on the input data and the objects of interest (e.g., a CME shock wave, filament, or active region) and are given in the examples section of the Wavetrack package: https://gitlab.com/iahelio/mosaiics/wavetrack.

3 Application to solar eruption observations

3.1 Application to coronal bright fronts

We have applied the Wavetrack methodology to three previously studied eruptive events observed by the AIA telescope (Kozarev et al., 2015, 2017). In the first event, on May 11, 2011, an erupting filament drives a relatively bright CBF on the northwest part of the solar disk close to the limb. The second event, on June 07, 2011, originated close to the southwest limb and featured a large and bright CBF. The third event, on December 12, 2013, again originated close to the southwest limb.

Figure 4 shows the successfully extracted Wavetrack CBF objects for the three events, for four observations separated by 3 min each. The Wavetrack object masks were applied to base difference images and visualized in a blue-green color map over the original AIA 193Å observations in grayscale to accentuate the CBFs. In all three events, Wavetrack captures the extent of the CBF very well in the consecutive time steps. It easily follows the evolution of the CBFs both on the solar disk and off the limb, even though the pixel distributions and intensities in those two regions are very different. These applications allow us to study in detail the time-dependent shapes of the CBFs, as well as their changing intensity distributions, separate from the rest of the corona. In addition, the Wavetrack method successfully selects the CBF objects under different coronal activity states. On December 12, 2013, the corona was much more active in this EUV channel than in the previous two events, but this did not preclude the successful segmentation of the CBF. Finally, the segmentation of the CBF in the last event demonstrates that Wavetrack is capable of tracking multiple separate parts of the same feature.

Fig. 4

The output of the Wavetrack method for three separate CBF events: May 11, 2011; June 07, 2011; December 12, 2013. For each event, we show four snapshots of inverted-colormap grayscale AIA 193Å images, with the Wavetrack CBF masks applied to base difference data overlaid using a different colormap to highlight them.

3.2 Application to eruptive filaments

Wavetrack may be used to isolate different observational features in the same eruptive event, allowing their spatial and temporal relationships to be studied. This is accomplished by executing a separate run of the algorithm with appropriate input parameters for tracking filament material. Figure 5 shows three time steps from the May 11, 2011 event, for which both the erupting filament (in orange-red) and the CBF it drives (in blue-green) have been segmented from AIA 193Å observations and overlaid on the original AIA data. This allows the relationship between the driver and the wave to be followed, demonstrating the versatility of Wavetrack. Figure 6 shows the application of Wavetrack to a large-scale filament in a slow eruption on September 29, 2013. The segmented filament feature is overlaid on the background solar corona. The three frames are taken at a 10-minute cadence. The on-disk feature is consistently tracked during this slow eruption.

Fig. 5

Combined Wavetrack tracking of the May 11, 2011 CBF and the filament driving it, shown for three snapshots of the event, separated by approximately 2 min each.

Fig. 6

Wavetrack tracking of the September 29, 2013 large-scale eruptive filament, shown for three snapshots early in the event, separated by 10 min each.

The presented applications to solar eruptive events demonstrate the versatility of Wavetrack: it recognizes and tracks the desired features both on the solar disk and off the solar limb. This makes it well suited for inclusion in pipelines aimed at faster processing and targeted analysis with additional models, such as differential emission measure (DEM) and Fourier local correlation tracking (FLCT). Below, we extend the analysis by examining the plane-of-sky velocities of the studied events.

3.3 Estimation of plane-of-sky velocities

We further apply the Wavetrack code to isolate and study the kinematics of CBFs and filaments in detail. To do so, we employ the Fourier Local Correlation Tracking (FLCT) method (Welsch et al., 2004; Fisher & Welsch, 2008), used extensively for the estimation of horizontal flows in photospheric magnetograms. The method was initially proposed as a cross-correlation technique for the precise measurement of the proper motion of tracers seen on successive images of time series of solar granulation (November & Simon, 1988). The aim of an FLCT calculation for two consecutive 2D maps of some scalar quantity is to find a 2D flow field that, when applied to the scalar field in the first image, most closely reproduces the second image. To construct a 2D velocity field connecting two images I1(x, y) and I2(x, y) taken at two different times t1 and t2, the algorithm starts from a given location within both images, calculates the velocity vector there, and then repeats this step, varying the location over all pixel positions. When applied to the object masks output by the Wavetrack code, it produces detailed maps of the instantaneous velocity of the object of interest. In this application, we use a freely available Python implementation of the method, distributed as a SunPy-affiliated package.
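In outline, a call to that wrapper might look as follows (a hedged sketch of the pyflct interface; the parameter values are illustrative, not those used in this work):

```python
import numpy as np
import pyflct  # SunPy-affiliated Python wrapper around the FLCT C code

# masked1, masked2: Wavetrack object masks applied to two AIA frames 24 s apart.
vx, vy, vm = pyflct.flct(masked1, masked2,
                         deltat=24.0,  # seconds between the two frames
                         deltas=1.0,   # pixel size (velocities in pixels/s here)
                         sigma=10.0)   # Gaussian correlation window, in pixels

speed = np.hypot(vx, vy)  # instantaneous plane-of-sky speed map
```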

Figure 7 shows four pairs of consecutive AIA images (one pair per row) from the May 11, 2011 event, used as input for the FLCT algorithm. The images in each pair are separated by 24 s. The pairs in the figure are 2 min apart from each other in order to follow the CBF progression over 6 min. The CBF change is difficult to observe in each pair but is significant between pairs.

Fig. 7

The Wavetrack output for four pairs of consecutive images separated by 2 min during the eruption of May 11, 2011, with the CBF mask applied, similar to Figure 4.

The corresponding FLCT results are shown in Figure 8; each row corresponds to a row in Figure 7. The left columns show the instantaneous plane-of-sky direction of motion of each region of the CBF using red arrows, with the arrows' lengths scaled with the local speed. We have removed spurious vectors with speeds larger than 1500 km/s. In the right columns, the instantaneous speed map is shown for the same four times; the color map is scaled between 0 and 600 km/s, from black to white. The results clearly show how the CBF expands from the central source in an uneven fashion. What is more, in this event the thinnest part of the CBF, corresponding to the region strongly driven by the erupting filament, is also the fastest moving as the event progresses, in contrast with the other regions both on disk and off limb. This cannot be inferred from intensity observations alone, but it is easy to observe in these results.

Fig. 8

The output of the FLCT model for the four pairs of consecutive images shown in Figure 7. Left panels: Arrows showing the plane-of-sky velocity. Right panels: The plane-of-sky speed.

For the event on June 07, 2011, the Wavetrack pairs are shown in Figure 9. Again, every two consecutive images are separated by 24 s, while the pairs are separated by 2 min. The FLCT results are shown in Figure 10. Here, a clear increase in the average speed can be seen as the event progresses – first in the leading edges moving away from the Sun and later in the on-disk front moving towards the center of the solar disk.

Fig. 9

The Wavetrack output for four pairs of consecutive images separated by 2 min during the eruption of June 07, 2011, similar to Figure 4.

Fig. 10

The output of the FLCT model for the June 07, 2011 event, for the four pairs of consecutive images shown in Figure 9. Left panels: Arrows showing the plane-of-sky velocity. Right panels: The plane-of-sky speed.

Finally, the Wavetrack pairs for the December 12, 2013 event are shown in Figure 11. They are again separated by 2 min each, while consecutive images are separated by 24 s. Here the CBF quickly splits into separate sub-features, likely due to the interactions of its various parts with the much more dynamic underlying coronal environment. The active magnetic environment surrounding the eruption source is easily seen and may be the cause of the slower speeds observed here, shown in Figure 12. Similar to the May 11, 2011 event, the highest speeds are observed consistently in the direction away from the Sun, above the erupting filament driver. Again, the continued compression by the driving filament has caused a thinner wavefront region, as well as higher speeds there.

Fig. 11

The Wavetrack output for four pairs of consecutive images separated by 2 min during the eruption of December 12, 2013, similar to Figure 4.

Fig. 12

The output of the FLCT model for the December 12, 2013 event, for the four pairs of consecutive images shown in Figure 11. Left panels: Arrows showing the plane-of-sky velocity. Right panels: The plane-of-sky speed.

We explore this effect further in the May 11, 2011 event, for which we have also calculated the plane-of-sky velocities of the erupting filament driver. Figure 13 shows three time steps of the event: the left panels show the velocity vectors for the CBF (in red) and the filament (in blue); the right panels show the speed maps for the CBF as in Figure 8 (in the yellow-red-black color map) and for the filament (in the turquoise-blue-purple color map). Our method demonstrates that the region of highest speeds in the rising filament (darker blue pixels in the right plots) lies directly below the region of highest speeds in the CBF (brighter yellow pixels).

Fig. 13

The output of the FLCT models for the combined CBF/filament May 11, 2011 event, for the three images shown in Figure 5. Left panels: Arrows showing the plane-of-sky velocity vectors (red for the CBF object, blue for the filament). Right panels: The plane-of-sky speed.

3.4 Kinematics of CBF centers of mass/geometric centers

Another useful aspect of the Wavetrack method is the determination of instantaneous positions and speeds of the feature centers. The method provides as output the geometric centers (GC) and the centers of mass (CM) for the three events studied here. The GC is defined as the geometric center of all Wavetrack mask pixels at a given time, while the CM additionally weights the pixels by the AIA 193Å channel's base difference intensity. The GC may be used to determine whether the CBF center is moving, which is useful for geometric modeling of compressive fronts in events. On the other hand, an instantaneous difference between the GC and CM positions indicates an activation, manifested by an increase in the intensity or emitting area of certain regions of the CBF. This may further be explored by calculating the angle between these two points over time, providing a measure of the relative activity as a function of position along the CBF.

We provide, for the three events studied here, GC and CM metrics over time. Figure 14 shows the results for the May 11, 2011 event. The panels show, from top to bottom: the X-axis (GCX and CMX) and Y-axis (GCY and CMY) positions, measured from the projected solar center in solar radii (Rsun); the radial distances GCR and CMR, in Rsun; the distance between the two radial positions δr = GCR − CMR, in km; the radial speeds of the two points, in km/s; and the angle between the two position vectors GC and CM over time, θGC−CM, in degrees. A clear split between the two radial positions appears and increases after 02:26 UT. This is mostly due to a shift to the south of the CM. The evolution of the angle θGC−CM supports this change. The reason for this is the gradual increase in the thickness of the wave in the southwest of the front. At the same time, the speed of the GC and CM points varies only slightly, from 0 to 100 km/s.
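Given the GC and CM pixel positions converted to solar radii from disk center, these metrics reduce to a few lines (a sketch with our own names; the conversion of δr to km via the solar radius is an assumption of the sketch):

```python
import numpy as np

R_SUN_KM = 6.957e5  # solar radius in km

def gc_cm_metrics(gc, cm):
    """delta_r (km) and theta_GC-CM (degrees) for GC and CM positions in Rsun."""
    gc_r, cm_r = np.linalg.norm(gc), np.linalg.norm(cm)
    delta_r = (gc_r - cm_r) * R_SUN_KM                 # radial split, in km
    cos_theta = np.dot(gc, cm) / (gc_r * cm_r)
    theta = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    return delta_r, theta
```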

Fig. 14

Kinematics of the center of mass and geometric centers for the event on May 11, 2011. The rows show GC and CM X-, Y-, and R-positions, the magnitude of the distance between the GC and CM in km, and the angle between the two points as a function of time.

For the dynamic event on June 07, 2011, the situation is different, as can be seen in Figure 15. In that event, the GC and CM moved with speeds of up to 300 km/s in the radial direction. The angle between them changed considerably as different parts of the CBF brightened and dimmed. We note that in this case, as well as in the following event, the significant decrease of the projected radial position of the GC and CM is mostly due to the north-west parts of the CBF dimming and disappearing from the Wavetrack feature. In the same period, however, the angle θ between the two remains robust, changing only slightly.

Fig. 15

Kinematics of the center of mass and geometric centers for the June 07, 2011 event. The panels are the same as in Figure 14.

Fig. 16

Kinematics of the center of mass and geometric centers for the December 12, 2013 event. The panels are the same as in Figure 14.

Finally, in the event on December 12, 2013, we observed smoothly varying positions of the GC and CM and relatively slow speeds, under 100 km/s. We attribute the increase of the radial position after 03:17 UT to the decrease in brightness of the southern part of the CBF. The angle θ decreases for most of the event, pointing increasingly southward, in line with the observed feature evolution.

4 Summary and conclusions

In this work, we have presented Wavetrack, a new method for the automated detection and tracking of dynamic coronal features. The method utilizes wavelet decomposition, feature enhancement and filtering, and final object recomposition. Wavetrack produces time-dependent feature pixel masks, which can be applied to integral or base-difference images to produce final feature maps. It is capable of tracking the pixels related to very dim, large-scale features like coronal bright fronts/EUV waves in AIA observations. It has also been applied successfully to eruptive filaments in these data. Wavetrack works for both on-disk and off-limb features, and it is capable of tracking well features that split into separate parts over time. We have implemented it as a flexible, object-oriented framework written in Python, and it is freely available for download and use.

We have applied Wavetrack to four separate events, of which we focus on three – the CBFs on May 11 and June 07, 2011, and on December 12, 2013. For all three, the method tracks the full CBF pixel maps. The model results have been used as input to the FLCT method of calculating plane-of-sky speeds. Combined with the Wavetrack results, its application readily reveals the evolution of driven and non-driven regions of the CBFs, as well as their relation to eruptive filament drivers. We find that drivers cause a compression effect thinning the CBF and increasing its speed, as expected from models. However, CBF brightness does not necessarily increase significantly in the driven regions in AIA data.

Finally, the method allows tracking changes over time as a function of feature region by calculating the time-dependent vector between the pixel geometric center and the center of mass, the latter computed by weighting with the observed pixel intensities. This is especially useful for large-scale features such as CBFs, as it provides a simple one-dimensional time-series metric for characterizing the feature evolution.

Currently, the model is somewhat limited in its application by the necessity of human input when used for the segmentation of specific types of features, especially dim ones. Several object criteria, such as the expected threshold intervals and the weight coefficients of the recomposition, must be set manually. When base difference imaging is used, the base image must be chosen relatively precisely so as not to pollute the input data with spurious features. Finally, in some cases it is necessary to fine-tune parameters for a specific event by trial and error. These limitations will be addressed in future work.

Our method is widely applicable both to different types of solar dynamic features and to different observational data. In future work, we intend to apply it to detailed studies of filament evolution, as well as to coronagraph data, with the goal of improving our understanding of how large-scale eruptive fronts change between different types of observations, as well as between the low and middle corona.

Acknowledgments

This work is part of the MOSAIICS project, funded under contract #KP-06-DV-8/18.12.2019 to the Institute of Astronomy and NAO, BAS, under the National Scientific Programme “VIHREN” in Bulgaria. The AIA data is courtesy of NASA/SDO and the AIA, EVE, and HMI science teams. The editor thanks two anonymous reviewers for their assistance in evaluating this paper.

Appendix A

Multi-scale data representation approach and a wavelet transform

Generally, the multi-scale image analysis/decomposition concept is based on the idea that the Hilbert space L2(R) can be represented through a hierarchy of orthogonal sub-spaces Vm, so that in the closure L2(R) can be represented in the following way:

(A1)

Here, for simplicity, we consider the case of a 1-d signal. For every function belonging to Vm, there is a corresponding scaled function, so that:

(A2)

and there exists a function

(A3)

so that its negative increment

(A4)

would form an orthogonal basis of the space, which means that functions

(A5)

form an orthogonal basis of the space Vm. Every function f(x) ∈ L2(R) can be represented as a set of approximations fm(x) ∈ Vm, i.e.

(A6)

Thus, the function fm(x) is an orthogonal projection of function f(x) on space Vm

(A7)

as,

(A8)

and the scaling equation is written as

(A9)

Now, we define another object, the wavelet spaces Wj, so that L2(R) can be decomposed as

(A10)

where every function f(x) which belongs to L2(R) has a unique decomposition

(A11)

where gj ∈ Wj. A wavelet function that satisfies the scaling condition (A9) is

(A12)

The wavelet coefficients are calculated accordingly, and each wavelet decomposition level can be represented by subtracting the data at levels j and j + 1:

(A13)

The difference c0,1 − c1,1 represents the difference in scales for the wavelet decomposition φ(x), while the corresponding wavelet function is defined as

(A14)

The 1-d signal is obtained in the following way:

(A15)

with {h(k)} calculated from the scaling function φ(x)

(A16)

In the case of the à trous wavelet, φ(x) takes the form of a B3 spline:

$$ \varphi(x) = \frac{1}{12}\left(|x-2|^3 - 4|x-1|^3 + 6|x|^3 - 4|x+1|^3 + |x+2|^3\right) \tag{A17} $$
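For reference, the corresponding filter coefficients and the mother kernel of Section 2.1 follow from this spline (our note, stated under the standard normalization of Starck & Murtagh, 2002):

```latex
% h(k) derived from the B3-spline scaling function; the 5x5 kernel k_0 of
% Sect. 2.1 is their outer product (standard convention, assumed here).
\begin{equation*}
  h = \left(\tfrac{1}{16},\ \tfrac{1}{4},\ \tfrac{3}{8},\ \tfrac{1}{4},\ \tfrac{1}{16}\right),
  \qquad (k_0)_{mn} = h_m h_n .
\end{equation*}
```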

Appendix B

Image gradient methods

The gradient field of an image is the result of applying the gradient operator to the image. In this case, the gradient field of the final object image (after the object mask is applied to the original image) is used for visual interpretation, but it can also serve as an additional criterion for more robust object segmentation, as it highlights object contours, allows saddle point locations to be determined, and allows gradient vectors to be calculated. Image gradients can also be used as part of training sets for deep convolutional networks. By default, we use the Sobel-Feldman operator, a discrete differentiation operator, to compute an approximation of the gradient of the image intensity function. At each pixel of the image, the result is either the corresponding gradient vector or the norm of this vector. The magnitude

(B1)

and direction

(B2)

of the vector are obtained as a product of separate vertical and horizontal convolutions of the image A with 3 × 3 kernel matrices:

(B3)

(B4)

Because the operator is the product of two separate vertical and horizontal operations, it is relatively inexpensive in terms of computation and can be implemented in parallel threads. As an alternative, for less complex images, the gradient is calculated by simple convolution of the image with the matrix

(B5)
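A compact sketch of the Sobel-Feldman computation of the magnitude (B1) and direction (B2) (our helper, not the Wavetrack API):

```python
import numpy as np
from scipy.ndimage import convolve

def sobel_gradient(image):
    """Gradient magnitude (B1) and direction (B2) via Sobel-Feldman kernels."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal derivative kernel
    ky = kx.T                                 # vertical derivative kernel
    gx = convolve(image, kx, mode="nearest")
    gy = convolve(image, ky, mode="nearest")
    return np.hypot(gx, gy), np.arctan2(gy, gx)
```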

The curvature χ of the intensity at pixel coordinates (x, y) (Hagenaar et al., 1999) in a certain direction (hi, hj) is given by

(B6)

where

(B7)

Around a local maximum, the curvature must be positive in all directions. Initially, this approach was used for the analysis of flux concentrations on magnetograms. A similar routine was also implemented in the Wavetrack package and has proven efficient in determining contours on AIA EUV images, since the gradient approximation given by the Sobel-Feldman operator is relatively rough, in particular for high-frequency variations in the image.

Appendix C

Image segmentation

Image segmentation is the partitioning of an image into different groups of pixels. In the current study, it is done in two passes with a sliding window. At each iteration, non-zero-intensity pixels are considered to belong to a new object, with separately calculated gradient field values at the same coordinates used as additional criteria. Pixels considered to belong to an object are assigned a new label, and eventually each object mask is placed on its own separate image. Ideally, if the configuration parameters are precise enough, only one object mask would be in the output, corresponding to the shock wave, filament, etc. The remaining pixel groups would form objects that are either too small or do not pass the previous filtering or decomposition-recomposition stages. In the case of a somewhat less perfect setup, manual verification and selection of the relevant objects from the output would be necessary.

Initial estimation of the object count and locations is done via a Monte-Carlo algorithm with window size r,

(C1)

where Nmin is the minimum number of pixels in an object. Pixels of the posterized image (see the definition of posterization in Sect. 2.3.2) are selected randomly, each at least r pixels apart from the previous selection. Pixels with different intensity values and zero gradient field values are accepted as belonging to separate objects (Metropolis, 1949).
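A functionally similar stand-in based on connected-component labeling (a sketch: SciPy's labeling replaces the two-pass sliding-window routine described above, with Nmin as the size threshold from Eq. C1):

```python
import numpy as np
from scipy import ndimage

def split_objects(mask, n_min=100):
    """Split a recomposed mask into stand-alone object masks, dropping
    pixel groups smaller than n_min (cf. the two-pass routine above)."""
    labels, n_objects = ndimage.label(mask > 0)
    return [labels == i for i in range(1, n_objects + 1)
            if np.count_nonzero(labels == i) >= n_min]
```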

References

Cite this article as: Stepanyuk O, Kozarev K & Nedal M 2022. Multi-scale image preprocessing and feature tracking for remote CME characterization. J. Space Weather Space Clim. 12, 20. https://doi.org/10.1051/swsc/2022020.

