User Guide¶
This notebook shows an example of how the PolarGeosAI package can be used to extract GOES images from scatterometer pixel information. The first step is to download the scatterometer data using the links provided in the documentation. Once the download is complete, the scatterometer data can be imported:
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
import PolarGeosAI as pga
# importing the data
polar_data = xr.open_dataset(
    '../../../GOES_ML_DOLDRUM/Branch-updatefilesystem/Github-GOES-ML/data/Scatterometers/cmems_obs-wind_glo_phy_nrt_l3-hy2c-hscat-asc-0.25deg_P1D-i-2020_2024.nc'
)
Once the scatterometer file is imported as an xarray Dataset, run the extract_scatter function to extract every valid pixel in the file into NumPy arrays containing the latitude, longitude, measurement time, and the desired main variable.
# Set the start and end datetime and the lat / lon ranges wanted for the extraction
start_datetime = "2022-01-01 00:00:00"
end_datetime = "2022-01-01 12:00:00"
lat_range = [-90, 90]
lon_range = [-180, 180]
observation_times, observation_lats, observation_lons, observation_windspeeds = pga.extract_scatter(
    polar_data=polar_data,
    start_datetime=start_datetime,
    end_datetime=end_datetime,
    lat_range=lat_range,
    lon_range=lon_range,
    main_variable="wind_speed",
)
Extracting scatter data: 100%|██████████| 20423/20423 [00:00<00:00, 832696.00it/s]
Scatterometer data extracted
With the scatterometer information in the required format, we can use the extract_goes function to extract GOES images corresponding to these scatterometer pixels.
# Channels to extract from GOES (can be a list of multiple channels, e.g. ["C01", "C03", ...]).
# Known bug: C02 is not working for now; all other channels work.
channels = ["C01"]
# use the function to extract all images corresponding to the observation data from the scatterometer
images = pga.extract_goes(
    observation_times,
    observation_lats,
    observation_lons,
    channels,
    polar_data,
)
# Check if all the arrays have the same shape
print(np.shape(images))
print(np.shape(observation_times))
print(np.shape(observation_lats))
print(np.shape(observation_lons))
INFO: No file found for C01 on day 2022/001/15 for minute 57, skipping file
INFO: No file found for C01 on day 2022/001/15 for minute 58, skipping file
Retrieving and processing GOES data: 100%|██████████| 11/11 [05:35<00:00, 30.46s/it]
(20423, 1, 30, 30)
(20423,)
(20423,)
(20423,)
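The image array is stacked as (samples, channels, height, width), while the observation arrays are flat vectors with one entry per sample. A short NumPy sketch (using a small dummy array in place of the real extraction) shows how to index into this layout:

```python
import numpy as np

# Hypothetical stand-in for the extracted array: 5 samples, 1 channel, 30x30 patches
images = np.random.rand(5, 1, 30, 30)

# images[i] is the i-th sample; images[i, 0] is its first (and here only) channel
first_patch = images[0, 0]
print(first_patch.shape)  # (30, 30)

# All patches for channel 0, as a (samples, 30, 30) stack
channel_stack = images[:, 0, :, :]
print(channel_stack.shape)  # (5, 30, 30)
```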
The following function packages the data into two arrays for model input: 1. images and 2. numerical data. It also filters out empty image files and replaces NaN values with 0's. If specified, the function will additionally transform the lat/lon/time information into solar zenith and solar azimuth angles.
# Package the data to be used in the model
images_packaged, numerical_data_packaged = pga.package_data(
    images, observation_lats, observation_lons, observation_times, observation_windspeeds,
    filter=True, solar_conversion=True,
)
# Check if the data is correctly packaged
print(np.shape(images_packaged), np.shape(numerical_data_packaged))
Filtered invalid images
Filled nans
converted to solar angles (sza, saa)
returning images, numerical_data
(18148, 1, 30, 30) (3, 18148)
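The solar conversion happens inside package_data, so its exact formulation is not shown here. Purely as an illustration of what such a conversion computes, the following standalone sketch derives an approximate solar zenith angle from latitude, longitude, and time using a standard declination approximation (this is not PolarGeosAI's actual implementation):

```python
import numpy as np

def approx_solar_zenith(lat_deg, lon_deg, day_of_year, utc_hour):
    """Rough solar zenith angle in degrees; illustration only,
    not PolarGeosAI's internal conversion."""
    # Solar declination via a common cosine approximation, in radians
    decl = np.radians(-23.44) * np.cos(np.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Local solar time and hour angle (15 degrees of rotation per hour from solar noon)
    solar_time = utc_hour + lon_deg / 15.0
    hour_angle = np.radians(15.0 * (solar_time - 12.0))
    lat = np.radians(lat_deg)
    cos_sza = np.sin(lat) * np.sin(decl) + np.cos(lat) * np.cos(decl) * np.cos(hour_angle)
    return np.degrees(np.arccos(np.clip(cos_sza, -1.0, 1.0)))

# Near the March equinox, at local solar noon on the equator, the sun is nearly overhead
print(approx_solar_zenith(0.0, 0.0, 79, 12.0))  # a small angle, within a few degrees of zero
```

The solar azimuth follows from a similar spherical-trigonometry relation; the key point is that (lat, lon, time) triples can be reduced to two physically meaningful angles per sample.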
Once all the extraction and processing steps are done, we can save the data with the save_data function, which names the file correctly and creates a single .npz file for this set.
# save the data as a single .npz file
pga.save_data(images_packaged, numerical_data_packaged, polar_data, start_datetime, end_datetime, channels)
Data saved to ./output_processed_data/processed_HSCAT-L3-25km_['C01']_2022-01-01 00:00:00_2022-01-01 12:00:00.npz
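A saved set like this can later be reloaded with np.load. The key names below ('images' and 'numerical_data') are assumptions for illustration, since save_data's exact keys are not shown here; the sketch writes its own small stand-in .npz and reads it back:

```python
import numpy as np

# Write a small stand-in .npz shaped like a packaged set
# (key names 'images' and 'numerical_data' are illustrative assumptions)
np.savez("demo_set.npz",
         images=np.zeros((4, 1, 30, 30), dtype=np.float32),
         numerical_data=np.zeros((3, 4), dtype=np.float32))

# Reload the bundle; each key maps to one saved array
with np.load("demo_set.npz") as data:
    print(sorted(data.keys()))
    print(data["images"].shape)          # (4, 1, 30, 30)
    print(data["numerical_data"].shape)  # (3, 4)
```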
Visually inspect the images alongside their associated numerical data:
# unpack the numerical data
sza = numerical_data_packaged[0]
saa = numerical_data_packaged[1]
wind_speed = numerical_data_packaged[2]
# Generate 400 random indices
random_indices = np.random.choice(len(sza), 400, replace=False)
vmin = np.min(images_packaged)
vmax = np.quantile(images_packaged, 0.95)  # adjust the quantile value to change the contrast of the images
fig, axes = plt.subplots(20, 20, figsize=(20, 20), dpi=200)
for i, idx in enumerate(random_indices):
    row = i // 20
    col = i % 20
    # Display the image
    axes[row, col].imshow(images_packaged[idx][0], vmin=vmin, vmax=vmax, cmap='viridis')
    axes[row, col].axis('off')
    # Add text annotations
    text_str = f'SZA: {sza[idx]:.2f}\nSAA: {saa[idx]:.2f}\nWind: {wind_speed[idx]:.2f}'
    axes[row, col].text(0.5, -0.15, text_str, transform=axes[row, col].transAxes, fontsize=6,
                        verticalalignment='bottom', horizontalalignment='center',
                        bbox=dict(boxstyle='round,pad=0.3', edgecolor='black', facecolor='white'))
plt.subplots_adjust(wspace=0.01, hspace=0.1)  # adjust the spacing between subplots
plt.show()
Batch extraction and processing¶
We can batch extract and process files by defining a function. In this example, we extract files for the same time range and lat/lon range for channels 1 through 13 (C02 excluded).
def full_workflow_per_channel(channels):
    # Import the scatterometer data
    polar_data = xr.open_dataset(
        '../polar_data/cmems_obs-wind_glo_phy_nrt_l3-hy2c-hscat-asc-0.25deg_P1D-i-2020_2024.nc'
    )
    # Set the start and end datetime (should be the same for all channels in this experiment)
    start_datetime = "2022-01-01"
    end_datetime = "2022-01-01"
    # Extract the scatterometer data
    observation_times, observation_lats, observation_lons, observation_windspeed = pga.extract_scatter(
        polar_data=polar_data,
        start_datetime=start_datetime,
        end_datetime=end_datetime,
        lat_range=[-90, 90],
        lon_range=[-180, 180],
        main_variable="wind_speed",
    )
    # Extract the GOES data from the scatterometer observation data
    images = pga.extract_goes(
        observation_times,
        observation_lats,
        observation_lons,
        channels,
        polar_data,
    )
    print(np.shape(images))
    print(np.shape(observation_times))
    print(np.shape(observation_lats))
    print(np.shape(observation_lons))
    # Package the data
    images, numerical_data = pga.package_data(
        images, observation_lats, observation_lons, observation_times, observation_windspeed,
        filter=True, solar_conversion=True,
    )
    print(np.shape(images), np.shape(numerical_data))
    # Save the data
    pga.save_data(images, numerical_data, polar_data, start_datetime, end_datetime, channels)
# run these functions to extract all channel information. Each channel will be saved as a separate .npz file
#full_workflow_per_channel(['C01'])
#full_workflow_per_channel(['C03'])
#full_workflow_per_channel(['C04'])
#full_workflow_per_channel(['C05'])
#full_workflow_per_channel(['C06'])
#full_workflow_per_channel(['C07'])
#full_workflow_per_channel(['C08'])
#full_workflow_per_channel(['C09'])
#full_workflow_per_channel(['C10'])
#full_workflow_per_channel(['C11'])
#full_workflow_per_channel(['C12'])
#full_workflow_per_channel(['C13'])
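The repeated per-channel calls above can also be written as a loop. The sketch below builds the channel list (skipping C02 because of the known bug); the workflow call itself is left commented out since it triggers the full download and extraction:

```python
# Build the channel list C01, C03..C13 (C02 skipped because of the known bug)
channels_to_run = [f"C{i:02d}" for i in range(1, 14) if i != 2]
print(channels_to_run)

for channel in channels_to_run:
    # Uncomment to run the full extraction for each channel in turn
    # full_workflow_per_channel([channel])
    pass
```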
Training a model with the prepared dataset¶
The model used in this demo is a multi-input model with an MLP on one branch and a simple CNN on the other.
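train_multi_input_model builds the model internally. Purely as a sketch of the architecture described above, here is one possible PyTorch version of a two-branch network: a small CNN over the (channels, 30, 30) image patches and an MLP over the numerical inputs (e.g. SZA and SAA), fused into a single wind-speed regression head. This is an illustrative assumption, not PolarGeosAI's actual model:

```python
import torch
import torch.nn as nn

class MultiInputNet(nn.Module):
    """Illustrative two-branch sketch; not PolarGeosAI's actual architecture."""
    def __init__(self, n_channels=1, n_numeric=2):
        super().__init__()
        # CNN branch for the (n_channels, 30, 30) image patches
        self.cnn = nn.Sequential(
            nn.Conv2d(n_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),  # 30 -> 15 -> 7, so 32 * 7 * 7 features
        )
        # MLP branch for the numerical inputs
        self.mlp = nn.Sequential(nn.Linear(n_numeric, 32), nn.ReLU())
        # Fused regression head predicting a scalar wind speed per sample
        self.head = nn.Linear(32 * 7 * 7 + 32, 1)

    def forward(self, image, numeric):
        feats = torch.cat([self.cnn(image), self.mlp(numeric)], dim=1)
        return self.head(feats).squeeze(1)

model = MultiInputNet()
out = model(torch.zeros(8, 1, 30, 30), torch.zeros(8, 2))
print(out.shape)  # torch.Size([8])
```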
channel = "C01"
file = f"./output_processed_data/processed_HSCAT-L3-25km___{channel}___2022-01-01_00_00_00_2022-01-01_12_00_00.npz"
score = pga.train_multi_input_model(file)
Folder created at ../results_folder/2025_01_09-11_03_38_processed_HSCAT-L3-25km_['C01']_2022-01-01_2022-01-01
running on cuda
torch.Size([512, 2, 30, 30])
torch.Size([512, 2])
torch.Size([512])
example target: tensor([ 8.7900,  5.8800,  4.0700,  ...,  6.9500,  5.5300,  9.0600])
example numerical input: tensor([[224.2550, 51.3416],
[224.6015, 53.6839],
[217.0907, 35.9263],
...,
[236.7490, 34.0473],
[225.6936, 38.0162],
[213.9052, 45.0415]])
Data loaded !
Start of training !
Epoch 1, Training Loss: 23.908505481222402
Epoch 1, Validation Loss: 10.597583611806234
Epoch 2, Training Loss: 11.56543955595597
Epoch 2, Validation Loss: 7.9001305103302
Epoch 3, Training Loss: 8.958913595780082
Epoch 3, Validation Loss: 6.944924036661784
Epoch 4, Training Loss: 7.920114040374756
Epoch 4, Validation Loss: 6.558149894078572
Epoch 5, Training Loss: 7.103043307428774
Epoch 5, Validation Loss: 7.077345530192058
...
Epoch 245, Training Loss: 2.4692050581393032
Epoch 245, Validation Loss: 1.5338396032651265
Epoch 246, Training Loss: 2.4615441612575366
Epoch 246, Validation Loss: 1.5772738059361775
Epoch 247, Training Loss: 2.4846494612486465
Epoch 247, Validation Loss: 1.5745766560236614
Epoch 248, Training Loss: 2.465627328209255
Epoch 248, Validation Loss: 1.451099197069804
Epoch 249, Training Loss: 2.467669507731562
Epoch 249, Validation Loss: 1.5514933268229167
Epoch 250, Training Loss: 2.514551815779313
Epoch 250, Validation Loss: 1.5431600213050842
Epoch 251, Training Loss: 2.5053622929946235
Epoch 251, Validation Loss: 1.5338248411814372
Epoch 252, Training Loss: 2.471154482468315
Epoch 252, Validation Loss: 1.4294729232788086
Epoch 253, Training Loss: 2.5377096404200015
Epoch 253, Validation Loss: 1.4789032340049744
Epoch 254, Training Loss: 2.469394808230193
Epoch 254, Validation Loss: 1.5466087063153584
Epoch 255, Training Loss: 2.510715660841569
Epoch 255, Validation Loss: 1.5274141629536946
Epoch 256, Training Loss: 2.469841936360235
Epoch 256, Validation Loss: 1.4632986585299175
Epoch 257, Training Loss: 2.466294050216675
Epoch 257, Validation Loss: 1.504918058713277
Epoch 258, Training Loss: 2.491579066152158
Epoch 258, Validation Loss: 1.5363964637120564
Epoch 259, Training Loss: 2.440987493680871
Epoch 259, Validation Loss: 1.484277069568634
Epoch 260, Training Loss: 2.47804508001908
Epoch 260, Validation Loss: 1.5427892605463664
Epoch 261, Training Loss: 2.458281558492909
Epoch 261, Validation Loss: 1.528926173845927
Epoch 262, Training Loss: 2.4390448383663013
Epoch 262, Validation Loss: 1.4311073422431946
Epoch 263, Training Loss: 2.422526805297188
Epoch 263, Validation Loss: 1.4283461968104045
Epoch 264, Training Loss: 2.4685215846351953
Epoch 264, Validation Loss: 1.4607419967651367
Epoch 265, Training Loss: 2.4664555217908775
Epoch 265, Validation Loss: 1.4867463906606038
Epoch 266, Training Loss: 2.3661262159762173
Epoch 266, Validation Loss: 1.468353509902954
Epoch 267, Training Loss: 2.4567637236221977
Epoch 267, Validation Loss: 1.4520227909088135
Epoch 268, Training Loss: 2.474279144535894
Epoch 268, Validation Loss: 1.4106765588124592
Epoch 269, Training Loss: 2.3974046603493067
Epoch 269, Validation Loss: 1.4977186719576518
Epoch 270, Training Loss: 2.4426499035047446
Epoch 270, Validation Loss: 1.421949843565623
Epoch 271, Training Loss: 2.447714142177416
Epoch 271, Validation Loss: 1.5126388470331829
Epoch 272, Training Loss: 2.406257769335871
Epoch 272, Validation Loss: 1.498878002166748
Epoch 273, Training Loss: 2.4567384823508887
Epoch 273, Validation Loss: 1.5009918014208476
Epoch 274, Training Loss: 2.410802022270534
Epoch 274, Validation Loss: 1.5295274257659912
Epoch 275, Training Loss: 2.389756679534912
Epoch 275, Validation Loss: 1.4344069560368855
Epoch 276, Training Loss: 2.4721107275589653
Epoch 276, Validation Loss: 1.5505277315775554
Epoch 277, Training Loss: 2.4443276550458823
Epoch 277, Validation Loss: 1.4804975986480713
Epoch 278, Training Loss: 2.445731494737708
Epoch 278, Validation Loss: 1.4396352767944336
Epoch 279, Training Loss: 2.4475897913393765
Epoch 279, Validation Loss: 1.5089754462242126
Epoch 280, Training Loss: 2.336937106173971
Epoch 280, Validation Loss: 1.534985105196635
Epoch 281, Training Loss: 2.4149699729421865
Epoch 281, Validation Loss: 1.4613931775093079
Epoch 282, Training Loss: 2.417814420617145
Epoch 282, Validation Loss: 1.4303881724675496
Epoch 283, Training Loss: 2.374472037605617
Epoch 283, Validation Loss: 1.4664150476455688
Epoch 284, Training Loss: 2.4128626325856084
Epoch 284, Validation Loss: 1.4653761982917786
Training done!
Test Loss: 1.3976077288389206
MSE after run: 1.4290299980978376
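If the training history is only available as printed log lines like the ones above, the loss curves can still be recovered by parsing the text with a regular expression and plotting them with matplotlib (already imported earlier in this guide). This is a minimal sketch, not part of the PolarGeosAI API; the embedded log snippet and the output file name `loss_curves.png` are illustrative:

```python
import re
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

# A few lines copied from the training log above (illustrative sample)
log = """\
Epoch 203, Training Loss: 2.611103804215141
Epoch 203, Validation Loss: 1.629831035931905
Epoch 284, Training Loss: 2.4128626325856084
Epoch 284, Validation Loss: 1.4653761982917786
"""

epochs, train_losses, val_losses = [], [], []
for line in log.splitlines():
    m = re.match(r"Epoch (\d+), (Training|Validation) Loss: ([\d.]+)", line)
    if m is None:
        continue  # skip non-loss lines such as "Training done!"
    epoch, kind, value = int(m.group(1)), m.group(2), float(m.group(3))
    if kind == "Training":
        epochs.append(epoch)
        train_losses.append(value)
    else:
        val_losses.append(value)

# Plot both curves on a shared epoch axis
fig, ax = plt.subplots()
ax.plot(epochs, train_losses, label="Training")
ax.plot(epochs, val_losses, label="Validation")
ax.set_xlabel("Epoch")
ax.set_ylabel("Loss")
ax.legend()
fig.savefig("loss_curves.png")
```

In a real run you would append the losses to lists inside the training loop instead of re-parsing printed output, but the parsing approach is handy when only the log text survives.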