Detecting Muon Momentum in the CMS Experiment at CERN using Deep Learning
Posted on Thu 19 November 2020 in posts • 67 min read
In the following notebook, we will apply several Deep Learning approaches to properly predict muon momentum. We will use Monte-Carlo simulated data from the Cathode Strip Chambers (CSC) at the CMS experiment. These chambers detect muon particles in the outer layer of the CMS detector, allowing us to store information about the hit locations.
The dataset contains more than 3 million muon events generated using Pythia.
Importing Libraries¶
Let's import the libraries that we will need in this project:
First, let's import the libraries that we will need for this project:
- Numpy - for vectors manipulation and operations
- Scikit-Learn - for building and training predictive models
- Matplotlib and Seaborn - for visualizing various types of plots
- Pandas - for data cleaning and analysis
import numpy as np
import sklearn
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
np.random.seed(48) # set the random seed for reproducible results
For training Neural Networks, we will be using Tensorflow 2 which is a famous Deep Learning library used for developing, training and deploying deep learning models.
Let's import Tensorflow and check its version:
import tensorflow as tf
print(tf.__version__)
2.3.0
Retrieving the Dataset¶
It seems our data is in NPZ format, therefore will use numpy.load() to load the data files:
data=np.load("histos_tba.npz")
The data file contain two arrays: variables and parameters.
data["variables"].shape,data["parameters"].shape
((3272341, 87), (3272341, 3))
We will extract the data from the variabes array for the muon hits detected at the CSC chambers only:
def delete_not_csc(data):
muon_hits=data['variables']
indices_to_del=[]
for i in range(muon_hits.shape[1]):
if(i%12>=5 and i<84):
indices_to_del.append(i)
return np.delete(muon_hits,indices_to_del,axis=1)
muon_hits=delete_not_csc(data)
muon_hits.shape
(3272341, 38)
Next, let's prepare the columns names for each feature and combine the two datasets into a Pandas DataFrame for easier analysis:
original_columns_names=["Phi angle","Theta angle","Bend angle","Time", "Ring",
"Front/Rear","Mask"]
columns_names=[]
for element in enumerate(original_columns_names):
for i in range(5):
columns_names.append(str(element[1])+str(i))
columns_names.append("XRoad0")
columns_names.append("XRoad1")
columns_names.append("XRoad2")
muon_hits_df=pd.DataFrame(muon_hits,columns=columns_names)
muon_hits_df["q/pt"]=data["parameters"][:,0]
muon_hits_df["Phi_angle"]=data["parameters"][:,1]
muon_hits_df["Eta_angle"]=data["parameters"][:,2]
Exploratory Data Analysis (EDA)¶
Let's start our EDA by taking a look at the columns types:
muon_hits_df.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 3272341 entries, 0 to 3272340 Data columns (total 41 columns): # Column Dtype --- ------ ----- 0 Phi angle0 float32 1 Phi angle1 float32 2 Phi angle2 float32 3 Phi angle3 float32 4 Phi angle4 float32 5 Theta angle0 float32 6 Theta angle1 float32 7 Theta angle2 float32 8 Theta angle3 float32 9 Theta angle4 float32 10 Bend angle0 float32 11 Bend angle1 float32 12 Bend angle2 float32 13 Bend angle3 float32 14 Bend angle4 float32 15 Time0 float32 16 Time1 float32 17 Time2 float32 18 Time3 float32 19 Time4 float32 20 Ring0 float32 21 Ring1 float32 22 Ring2 float32 23 Ring3 float32 24 Ring4 float32 25 Front/Rear0 float32 26 Front/Rear1 float32 27 Front/Rear2 float32 28 Front/Rear3 float32 29 Front/Rear4 float32 30 Mask0 float32 31 Mask1 float32 32 Mask2 float32 33 Mask3 float32 34 Mask4 float32 35 XRoad0 float32 36 XRoad1 float32 37 XRoad2 float32 38 q/pt float32 39 Phi_angle float32 40 Eta_angle float32 dtypes: float32(41) memory usage: 511.8 MB
We notice that all our variables have float data type, which we will prove helpful later on when preprocessing the data.
Next, let's generate some useful statistics about our features:
muon_hits_df.describe()
Phi angle0 | Phi angle1 | Phi angle2 | Phi angle3 | Phi angle4 | Theta angle0 | Theta angle1 | Theta angle2 | Theta angle3 | Theta angle4 | Bend angle0 | Bend angle1 | Bend angle2 | Bend angle3 | Bend angle4 | Time0 | Time1 | Time2 | Time3 | Time4 | Ring0 | Ring1 | Ring2 | Ring3 | Ring4 | Front/Rear0 | Front/Rear1 | Front/Rear2 | Front/Rear3 | Front/Rear4 | Mask0 | Mask1 | Mask2 | Mask3 | Mask4 | XRoad0 | XRoad1 | XRoad2 | q/pt | Phi_angle | Eta_angle | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
count | 2.300975e+06 | 870666.000000 | 2.789617e+06 | 2.605933e+06 | 2.439864e+06 | 2.300975e+06 | 870666.000000 | 2.789617e+06 | 2.605933e+06 | 2.439864e+06 | 2.300975e+06 | 870666.000000 | 2.789617e+06 | 2.605933e+06 | 2.439864e+06 | 2.300975e+06 | 870666.000000 | 2.789617e+06 | 2.605933e+06 | 2.439864e+06 | 2.300975e+06 | 870666.0 | 2.789617e+06 | 2.605933e+06 | 2.439864e+06 | 2.300975e+06 | 870666.000000 | 2.789617e+06 | 2.605933e+06 | 2.439864e+06 | 3.272341e+06 | 3.272341e+06 | 3.272341e+06 | 3.272341e+06 | 3.272341e+06 | 3.272341e+06 | 3.272341e+06 | 3.272341e+06 | 3.272341e+06 | 3.272341e+06 | 3.272341e+06 |
mean | 2.898916e+03 | 2942.738281 | 2.884543e+03 | 2.895544e+03 | 2.901002e+03 | 2.513442e+01 | 62.888813 | 3.345796e+01 | 3.376106e+01 | 3.479657e+01 | 1.771088e-02 | -0.075262 | 2.404990e-03 | 7.820615e-04 | 1.634517e-03 | -1.554471e-02 | -0.029150 | -2.238408e-02 | -2.388741e-02 | -1.674315e-02 | 2.208553e+00 | 2.0 | 1.234539e+00 | 1.328101e+00 | 1.416067e+00 | 4.993205e-01 | 0.465420 | 5.033727e-01 | 4.964771e-01 | 5.009226e-01 | 2.968413e-01 | 7.339318e-01 | 1.475164e-01 | 2.036487e-01 | 2.543980e-01 | 3.977467e+00 | 2.312063e+00 | 9.032854e+01 | -1.427713e-04 | -4.566138e-04 | -1.287372e-03 |
std | 1.063501e+03 | 1061.839233 | 1.087773e+03 | 1.087355e+03 | 1.084414e+03 | 1.279694e+01 | 10.498766 | 2.147677e+01 | 2.178618e+01 | 2.156608e+01 | 1.678599e+01 | 19.846676 | 3.044079e-01 | 3.709632e-01 | 3.650869e-01 | 1.228342e-01 | 0.167572 | 1.469481e-01 | 1.515253e-01 | 1.274973e-01 | 1.475620e+00 | 0.0 | 4.290860e-01 | 4.634236e-01 | 4.966435e-01 | 5.000001e-01 | 0.499028 | 4.999973e-01 | 4.999975e-01 | 5.000001e-01 | 4.557488e-01 | 4.387615e-01 | 3.566231e-01 | 3.976815e-01 | 4.353520e-01 | 1.956127e+00 | 1.881727e+00 | 3.388365e+01 | 2.625431e-01 | 1.811694e+00 | 1.889245e+00 |
min | 6.620000e+02 | 690.000000 | 9.600000e+01 | 9.400000e+01 | 9.600000e+01 | 5.000000e+00 | 46.000000 | 5.000000e+00 | 5.000000e+00 | 6.000000e+00 | -1.329103e+02 | -81.000000 | -1.000000e+00 | -1.000000e+00 | -1.000000e+00 | -1.000000e+00 | -1.000000 | -1.000000e+00 | -1.000000e+00 | -1.000000e+00 | 1.000000e+00 | 2.0 | 1.000000e+00 | 1.000000e+00 | 1.000000e+00 | 0.000000e+00 | 0.000000 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 2.000000e+00 | -4.999985e-01 | -3.141591e+00 | -2.499856e+00 |
25% | 2.005000e+03 | 2032.000000 | 1.980000e+03 | 1.998000e+03 | 2.000000e+03 | 1.400000e+01 | 54.000000 | 1.600000e+01 | 1.600000e+01 | 1.600000e+01 | -1.200000e+01 | -14.000000 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 0.000000 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 1.000000e+00 | 2.0 | 1.000000e+00 | 1.000000e+00 | 1.000000e+00 | 0.000000e+00 | 0.000000 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 3.000000e+00 | 1.000000e+00 | 6.200000e+01 | -2.194096e-01 | -1.571401e+00 | -1.880253e+00 |
50% | 2.891000e+03 | 2937.000000 | 2.882000e+03 | 2.896000e+03 | 2.904000e+03 | 2.300000e+01 | 62.000000 | 2.800000e+01 | 2.800000e+01 | 2.900000e+01 | 0.000000e+00 | 0.000000 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 0.000000 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 1.000000e+00 | 2.0 | 1.000000e+00 | 1.000000e+00 | 1.000000e+00 | 0.000000e+00 | 0.000000 | 1.000000e+00 | 0.000000e+00 | 1.000000e+00 | 0.000000e+00 | 1.000000e+00 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 4.000000e+00 | 2.000000e+00 | 9.000000e+01 | -4.459424e-04 | 7.330791e-04 | -1.202088e+00 |
75% | 3.792000e+03 | 3853.000000 | 3.791000e+03 | 3.795000e+03 | 3.800000e+03 | 3.500000e+01 | 71.000000 | 4.700000e+01 | 5.200000e+01 | 5.200000e+01 | 1.200000e+01 | 14.000000 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 0.000000 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 4.000000e+00 | 2.0 | 1.000000e+00 | 2.000000e+00 | 2.000000e+00 | 1.000000e+00 | 1.000000 | 1.000000e+00 | 1.000000e+00 | 1.000000e+00 | 1.000000e+00 | 1.000000e+00 | 0.000000e+00 | 0.000000e+00 | 1.000000e+00 | 5.000000e+00 | 4.000000e+00 | 1.190000e+02 | 2.191571e-01 | 1.569422e+00 | 1.879843e+00 |
max | 4.987000e+03 | 4950.000000 | 4.954000e+03 | 4.952000e+03 | 4.952000e+03 | 5.300000e+01 | 88.000000 | 8.800000e+01 | 8.800000e+01 | 8.800000e+01 | 1.421401e+02 | 70.000000 | 1.000000e+00 | 1.000000e+00 | 1.000000e+00 | 0.000000e+00 | 0.000000 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 4.000000e+00 | 2.0 | 2.000000e+00 | 2.000000e+00 | 2.000000e+00 | 1.000000e+00 | 1.000000 | 1.000000e+00 | 1.000000e+00 | 1.000000e+00 | 1.000000e+00 | 1.000000e+00 | 1.000000e+00 | 1.000000e+00 | 1.000000e+00 | 8.000000e+00 | 5.000000e+00 | 1.540000e+02 | 4.999996e-01 | 3.141592e+00 | 2.499996e+00 |
We notice that some features such as Phi angle1, Theta angle1 and Bend angle1 have some missing values.
Let's find the exact percentage of null values for each feature:
null_perc=((muon_hits_df.isnull().
sum()/muon_hits_df.
shape[0]
)*100).sort_values(ascending=False)
fig = plt.figure(figsize = (10,5))
ax = fig.gca()
ax.set_xlabel("features names")
ax.set_ylabel("missing values percentage")
null_perc.plot.bar(ax=ax)
plt.show()
We notice that Front/Rear1, Phi angle1, Theta angle1, Bend angle1, Time1 and Ring1 features have more than 70% missing values!
Next, let's take a look on the data distribution of each feature:
fig = plt.figure(figsize = (30,40))
ax = fig.gca()
muon_hits_df.hist(ax=ax)
plt.show()
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:3: UserWarning: To output multiple subplots, the figure containing the passed axes is being cleared This is separate from the ipykernel package so we can avoid doing imports until
Wow, those are a LOT of plots! It is clear from these plots that some features such as Theta Angle1,Theta Angle2, Theta Angle3 and XRoad1 can benefit from being transformed into Normal Distribution (Standardization).
Next, let's investigate the effect of each feature on the target feature q/pt:
corr=muon_hits_df.corr()
f, ax = plt.subplots(figsize=(11, 9))
cmap = sns.diverging_palette(230, 20, as_cmap=True)
mask = np.triu(np.ones_like(corr, dtype=bool))
sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
<matplotlib.axes._subplots.AxesSubplot at 0x7f6a2cb0de48>
Framing the Problem as a Classification Task¶
Instead of trying to apply regression to predict the muon momenta values (q/pt), let's try to cluster the momenta into 4 classes: 0-10 GeV, 10-30 GeV, 30-100 GeV and >100 GeV, and instead frame the problem as a classification task.
Let's investigate the q/pt column and separate it from the data:
pt_inverse_labels=muon_hits_df["q/pt"]
muon_hits_df_x=muon_hits_df.drop("q/pt",axis=1)
muon_hits_df["q/pt"].describe()
count 3.272341e+06 mean -1.427713e-04 std 2.625431e-01 min -4.999985e-01 25% -2.194096e-01 50% -4.459424e-04 75% 2.191571e-01 max 4.999996e-01 Name: q/pt, dtype: float64
We will use the absolute value of the reciprocal of the target feature (pt/q) to group the momenta into the previously mentioned classes:
pt_labels=abs((1/pt_inverse_labels))
pt_labels.describe()
count 3.272341e+06 mean 1.859705e+01 std 1.257423e+02 min 2.000001e+00 25% 2.998728e+00 50% 4.560210e+00 75% 9.115835e+00 max 6.989345e+03 Name: q/pt, dtype: float64
Let's visualize the data distribution of this feature:
pt_labels.hist(bins=1000)
plt.xlim([0,150])
plt.title("Data Distribution of p_T labels")
plt.show()
That is a very interesting plot! We can clearly see that there are distinct groups of points of different magnitudes.
We can now group the values into 4 groups: 0-10 GeV, 10-30 GeV, 30-100 GeV and >100 GeV using the cut method from Pandas.
bins = pd.IntervalIndex.from_tuples([(0, 10), (10, 30), (30, 100),(100,pt_labels.max()+1)])
org_pt_labels_groups=pd.cut(pt_labels, bins)
Let's plot the grouped data now:
target_counts=org_pt_labels_groups.value_counts()
target_counts.plot.bar()
plt.title("Distribution of classes in target feature")
plt.show()
It is very clear that the classes are not balanced, therefore it is important to balance those classes whenever we start training our Neural Network to avoid any bias.
Splitting the Data¶
Next, let's seperate the data into train and test data, we will use a 90%-10% split.
As we already noticed, the data is highly imbalanced, therefore it is important to perform the following two tasks:
- Add class weights to balance out the classes
- Make the test set representative of the classes
To split the train and test data, we will use the StratifiedShuffleSplit method in Scikit-Learn to make both the train data and test data representative of the target classes distribution:
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits=1, test_size=0.1, random_state=42)
for train_index, test_index in split.split(muon_hits_df_x, org_pt_labels_groups):
muon_hits_x_train_set = muon_hits_df_x.loc[train_index]
pt_labels_groups_y_train_set=org_pt_labels_groups[train_index]
muon_hits_x_test_set = muon_hits_df_x.loc[test_index]
pt_labels_groups_y_test_set=org_pt_labels_groups[test_index]
Let's verify if the data has the same proportions for the target label in the train and test sets:
def target_proportions(data):
return data.value_counts()/len(data)*100
proportions_df=pd.DataFrame({"Original": target_proportions(org_pt_labels_groups),
"Train":target_proportions(pt_labels_groups_y_train_set),
"Test":target_proportions(pt_labels_groups_y_test_set)},
)
proportions_df
Original | Train | Test | |
---|---|---|---|
(0.0, 10.0] | 77.201062 | 77.201058 | 77.201094 |
(10.0, 30.0] | 15.207951 | 15.207942 | 15.208031 |
(30.0, 100.0] | 5.331871 | 5.331862 | 5.331948 |
(100.0, 6990.3447265625] | 2.259117 | 2.259138 | 2.258927 |
Neat! Now that we are sure that the train and test data are representative of the original dataset, we can now proceed to the preprocessing phase.
Preparing data for training¶
As we noticed earlier, we need delete the columns with percentage of missing values bigger than 70%:
def delete_columns(df,perc):
null_perc=(df.isnull().sum()/df.shape[0])*100
col_to_del= [col for col in df.columns if(((null_perc)>perc)[col])]
print("columns deleted:", col_to_del)
return df.drop(col_to_del,axis=1)
muon_hits_x_train_set_1=delete_columns(muon_hits_x_train_set,70)
columns deleted: ['Phi angle1', 'Theta angle1', 'Bend angle1', 'Time1', 'Ring1', 'Front/Rear1']
muon_hits_x_train_set_1.shape
(2945106, 34)
For all other features with percentage of null values less than 70%, we will replace the null values with the mean value of their corresponding columns:
def replace_missing_with_mean(df):
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
imputed=imputer.fit_transform(df)
return pd.DataFrame(imputed,columns=df.columns)
muon_hits_x_train_set_2=replace_missing_with_mean(muon_hits_x_train_set_1)
null_perc_2=(muon_hits_x_train_set_2.isnull().
sum()/muon_hits_x_train_set_2.
shape[0])*100
null_perc_2.sort_values(ascending=False)
Eta_angle 0.0 Bend angle0 0.0 Time3 0.0 Time2 0.0 Time0 0.0 Bend angle4 0.0 Bend angle3 0.0 Bend angle2 0.0 Theta angle4 0.0 Phi_angle 0.0 Theta angle3 0.0 Theta angle2 0.0 Theta angle0 0.0 Phi angle4 0.0 Phi angle3 0.0 Phi angle2 0.0 Time4 0.0 Ring0 0.0 Ring2 0.0 Ring3 0.0 Ring4 0.0 Front/Rear0 0.0 Front/Rear2 0.0 Front/Rear3 0.0 Front/Rear4 0.0 Mask0 0.0 Mask1 0.0 Mask2 0.0 Mask3 0.0 Mask4 0.0 XRoad0 0.0 XRoad1 0.0 XRoad2 0.0 Phi angle0 0.0 dtype: float64
Now we are sure that we filled all missing values.
Let's next create the preprocessing pipeline and convert the target feature into one hot encoded variables.
def preprocess_pipeline(df_x,df_y):
#delete columns with missing values more than 70%
df_x=delete_columns(df_x,70)
#impute the other columns with missing values with their corresponding means
df_x=replace_missing_with_mean(df_x)
#standardize the data
from sklearn.preprocessing import StandardScaler
scaler=StandardScaler()
x=scaler.fit_transform(df_x)
#convert labels into dummy variables (one hot encoded)
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
encoded_y = encoder.fit_transform(df_y)
print(encoder.classes_)
from sklearn.preprocessing import OneHotEncoder
ohe=OneHotEncoder(sparse=False)
ohe_y=ohe.fit_transform(encoded_y.reshape(-1,1))
print(ohe.categories_)
return x,ohe_y,encoded_y
muon_hits_x_train_values,pt_labels_ohe_y_train,y_encoded_train=preprocess_pipeline(muon_hits_x_train_set,pt_labels_groups_y_train_set)
columns deleted: ['Phi angle1', 'Theta angle1', 'Bend angle1', 'Time1', 'Ring1', 'Front/Rear1'] [Interval(0.0, 10.0, closed='right') Interval(10.0, 30.0, closed='right') Interval(30.0, 100.0, closed='right') Interval(100.0, 6990.3447265625, closed='right')] [array([0, 1, 2, 3])]
muon_hits_x_test_values,pt_labels_ohe_y_test,y_encoded_test=preprocess_pipeline(muon_hits_x_test_set,pt_labels_groups_y_test_set)
columns deleted: ['Phi angle1', 'Theta angle1', 'Bend angle1', 'Time1', 'Ring1', 'Front/Rear1'] [Interval(0.0, 10.0, closed='right') Interval(10.0, 30.0, closed='right') Interval(30.0, 100.0, closed='right') Interval(100.0, 6990.3447265625, closed='right')] [array([0, 1, 2, 3])]
Now our datasets are ready for training and testing!
One last task to do before training, which is generating the classes weights:
#option 1
from sklearn.utils import class_weight
class_weights = class_weight.compute_class_weight('balanced',
np.unique(y_encoded_train),
y_encoded_train)
class_weights
array([ 0.32382976, 1.64387796, 4.68879315, 11.06616918])
#option 2
class_weights=len(y_encoded_train)/(4*(np.unique(y_encoded_train,return_counts=True)[1]))
classes_weights={i: class_weights[i] for i in range(len(class_weights))}
classes_weights
{0: 0.3238297576631087, 1: 1.6438779611065217, 2: 4.688793152857116, 3: 11.066169176661557}
Building the Neural Network¶
We will use Tensorflow's Keras library to build the Neural Network
Building a Fully Connected Neural Network¶
Let's first build and compile the model:
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense,Dropout
model=Sequential()
model.add(Dense(512,input_dim=(muon_hits_x_train_values.shape[1]),activation='relu'))
model.add(Dense(256,activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(128,activation='relu'))
model.add(Dense(64,activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(32,activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(4,activation='softmax'))
model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy'])
Let's take a closer look at the model architecture:
model.summary()
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense (Dense) (None, 512) 17920 _________________________________________________________________ dense_1 (Dense) (None, 256) 131328 _________________________________________________________________ dropout (Dropout) (None, 256) 0 _________________________________________________________________ dense_2 (Dense) (None, 128) 32896 _________________________________________________________________ dense_3 (Dense) (None, 64) 8256 _________________________________________________________________ dropout_1 (Dropout) (None, 64) 0 _________________________________________________________________ dense_4 (Dense) (None, 32) 2080 _________________________________________________________________ dropout_2 (Dropout) (None, 32) 0 _________________________________________________________________ dense_5 (Dense) (None, 4) 132 ================================================================= Total params: 192,612 Trainable params: 192,612 Non-trainable params: 0 _________________________________________________________________
Next, we will set some callbacks for the model including a Checkpoint callback and Early Stopping callback that both monitor the model's validation accuracy:
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.callbacks import EarlyStopping
filepath="classifier_weights2-improvement-{epoch:02d}-{val_accuracy:.2f}.hdf5"
checkpoint1 = ModelCheckpoint(filepath, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max')
checkpoint2= EarlyStopping( monitor='val_accuracy', patience=3)
callbacks_list = [checkpoint1,checkpoint1]
Now we can finally start training!
We will use a 25% validation split, batch size of 500 and will train for 100 epochs.
history = model.fit(muon_hits_x_train_values, pt_labels_ohe_y_train,
validation_split=0.25,
epochs=200,
batch_size=500,
callbacks=callbacks_list,
class_weight=classes_weights)
Epoch 1/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.9673 - accuracy: 0.7593 Epoch 00001: val_accuracy improved from -inf to 0.76394, saving model to classifier_weights2-improvement-01-0.76.hdf5 Epoch 00001: val_accuracy did not improve from 0.76394 4418/4418 [==============================] - 91s 21ms/step - loss: 0.9673 - accuracy: 0.7593 - val_loss: 0.5584 - val_accuracy: 0.7639 Epoch 2/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.9354 - accuracy: 0.7702 Epoch 00002: val_accuracy improved from 0.76394 to 0.77115, saving model to classifier_weights2-improvement-02-0.77.hdf5 Epoch 00002: val_accuracy did not improve from 0.77115 4418/4418 [==============================] - 91s 21ms/step - loss: 0.9354 - accuracy: 0.7702 - val_loss: 0.5633 - val_accuracy: 0.7711 Epoch 3/200 4418/4418 [==============================] - ETA: 0s - loss: 0.9298 - accuracy: 0.7716 Epoch 00003: val_accuracy improved from 0.77115 to 0.77407, saving model to classifier_weights2-improvement-03-0.77.hdf5 Epoch 00003: val_accuracy did not improve from 0.77407 4418/4418 [==============================] - 95s 22ms/step - loss: 0.9298 - accuracy: 0.7716 - val_loss: 0.5344 - val_accuracy: 0.7741 Epoch 4/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.9159 - accuracy: 0.7738 Epoch 00004: val_accuracy improved from 0.77407 to 0.78281, saving model to classifier_weights2-improvement-04-0.78.hdf5 Epoch 00004: val_accuracy did not improve from 0.78281 4418/4418 [==============================] - 92s 21ms/step - loss: 0.9159 - accuracy: 0.7738 - val_loss: 0.5153 - val_accuracy: 0.7828 Epoch 5/200 4418/4418 [==============================] - ETA: 0s - loss: 0.8343 - accuracy: 0.7938 Epoch 00005: val_accuracy improved from 0.78281 to 0.80337, saving model to classifier_weights2-improvement-05-0.80.hdf5 Epoch 00005: val_accuracy did not improve from 0.80337 4418/4418 [==============================] - 90s 20ms/step - loss: 0.8343 - accuracy: 0.7938 - val_loss: 0.4637 - val_accuracy: 0.8034 Epoch 6/200 4418/4418 [==============================] - ETA: 0s - loss: 0.7896 - accuracy: 0.8026 Epoch 00006: val_accuracy improved from 0.80337 to 0.80959, saving model to classifier_weights2-improvement-06-0.81.hdf5 Epoch 00006: val_accuracy did not improve from 0.80959 4418/4418 [==============================] - 94s 21ms/step - loss: 0.7896 - accuracy: 0.8026 - val_loss: 0.4423 - val_accuracy: 0.8096 Epoch 7/200 4418/4418 [==============================] - ETA: 0s - loss: 0.7749 - accuracy: 0.8073 Epoch 00007: val_accuracy improved from 0.80959 to 0.81298, saving model to classifier_weights2-improvement-07-0.81.hdf5 Epoch 00007: val_accuracy did not improve from 0.81298 4418/4418 [==============================] - 88s 20ms/step - loss: 0.7749 - accuracy: 0.8073 - val_loss: 0.4559 - val_accuracy: 0.8130 Epoch 8/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.7646 - accuracy: 0.8099 Epoch 00008: val_accuracy did not improve from 0.81298 Epoch 00008: val_accuracy did not improve from 0.81298 4418/4418 [==============================] - 88s 20ms/step - loss: 0.7646 - accuracy: 0.8099 - val_loss: 0.4592 - val_accuracy: 0.8104 Epoch 9/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.7573 - accuracy: 0.8124 Epoch 00009: val_accuracy did not improve from 0.81298 Epoch 00009: val_accuracy did not improve from 0.81298 4418/4418 [==============================] - 89s 20ms/step - loss: 0.7573 - accuracy: 0.8124 - val_loss: 0.4624 - val_accuracy: 0.8085 Epoch 10/200 4418/4418 [==============================] - ETA: 0s - loss: 0.7539 - accuracy: 0.8128 Epoch 00010: val_accuracy did not improve from 0.81298 Epoch 00010: val_accuracy did not improve from 0.81298 4418/4418 [==============================] - 93s 21ms/step - loss: 0.7539 - accuracy: 0.8128 - val_loss: 0.4635 - val_accuracy: 0.8106 Epoch 11/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.7482 - accuracy: 0.8145 Epoch 00011: val_accuracy did not improve from 0.81298 Epoch 00011: val_accuracy did not improve from 0.81298 4418/4418 [==============================] - 92s 21ms/step - loss: 0.7482 - accuracy: 0.8145 - val_loss: 0.4657 - val_accuracy: 0.8091 Epoch 12/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.7413 - accuracy: 0.8146 Epoch 00012: val_accuracy improved from 0.81298 to 0.81855, saving model to classifier_weights2-improvement-12-0.82.hdf5 Epoch 00012: val_accuracy did not improve from 0.81855 4418/4418 [==============================] - 89s 20ms/step - loss: 0.7413 - accuracy: 0.8146 - val_loss: 0.4390 - val_accuracy: 0.8185 Epoch 13/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.7392 - accuracy: 0.8161 Epoch 00013: val_accuracy did not improve from 0.81855 Epoch 00013: val_accuracy did not improve from 0.81855 4418/4418 [==============================] - 92s 21ms/step - loss: 0.7392 - accuracy: 0.8161 - val_loss: 0.4667 - val_accuracy: 0.8143 Epoch 14/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.7366 - accuracy: 0.8172 Epoch 00014: val_accuracy improved from 0.81855 to 0.82150, saving model to classifier_weights2-improvement-14-0.82.hdf5 Epoch 00014: val_accuracy did not improve from 0.82150 4418/4418 [==============================] - 91s 21ms/step - loss: 0.7366 - accuracy: 0.8172 - val_loss: 0.4311 - val_accuracy: 0.8215 Epoch 15/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.7353 - accuracy: 0.8164 Epoch 00015: val_accuracy did not improve from 0.82150 Epoch 00015: val_accuracy did not improve from 0.82150 4418/4418 [==============================] - 91s 21ms/step - loss: 0.7353 - accuracy: 0.8164 - val_loss: 0.4500 - val_accuracy: 0.8124 Epoch 16/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.7307 - accuracy: 0.8174 Epoch 00016: val_accuracy did not improve from 0.82150 Epoch 00016: val_accuracy did not improve from 0.82150 4418/4418 [==============================] - 91s 21ms/step - loss: 0.7306 - accuracy: 0.8174 - val_loss: 0.4410 - val_accuracy: 0.8155 Epoch 17/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.7278 - accuracy: 0.8189 Epoch 00017: val_accuracy did not improve from 0.82150 Epoch 00017: val_accuracy did not improve from 0.82150 4418/4418 [==============================] - 93s 21ms/step - loss: 0.7278 - accuracy: 0.8189 - val_loss: 0.4358 - val_accuracy: 0.8186 Epoch 18/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.7255 - accuracy: 0.8186 Epoch 00018: val_accuracy did not improve from 0.82150 Epoch 00018: val_accuracy did not improve from 0.82150 4418/4418 [==============================] - 94s 21ms/step - loss: 0.7255 - accuracy: 0.8186 - val_loss: 0.4360 - val_accuracy: 0.8170 Epoch 19/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.7242 - accuracy: 0.8177 Epoch 00019: val_accuracy did not improve from 0.82150 Epoch 00019: val_accuracy did not improve from 0.82150 4418/4418 [==============================] - 90s 20ms/step - loss: 0.7242 - accuracy: 0.8177 - val_loss: 0.4554 - val_accuracy: 0.8125 Epoch 20/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.7227 - accuracy: 0.8190 Epoch 00020: val_accuracy did not improve from 0.82150 Epoch 00020: val_accuracy did not improve from 0.82150 4418/4418 [==============================] - 90s 20ms/step - loss: 0.7227 - accuracy: 0.8190 - val_loss: 0.4464 - val_accuracy: 0.8158 Epoch 21/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.7190 - accuracy: 0.8190 Epoch 00021: val_accuracy did not improve from 0.82150 Epoch 00021: val_accuracy did not improve from 0.82150 4418/4418 [==============================] - 93s 21ms/step - loss: 0.7189 - accuracy: 0.8190 - val_loss: 0.4375 - val_accuracy: 0.8176 Epoch 22/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.7161 - accuracy: 0.8198 Epoch 00022: val_accuracy did not improve from 0.82150 Epoch 00022: val_accuracy did not improve from 0.82150 4418/4418 [==============================] - 91s 21ms/step - loss: 0.7161 - accuracy: 0.8198 - val_loss: 0.4517 - val_accuracy: 0.8116 Epoch 23/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.7152 - accuracy: 0.8203 Epoch 00023: val_accuracy improved from 0.82150 to 0.82385, saving model to classifier_weights2-improvement-23-0.82.hdf5 Epoch 00023: val_accuracy did not improve from 0.82385 4418/4418 [==============================] - 91s 21ms/step - loss: 0.7152 - accuracy: 0.8203 - val_loss: 0.4387 - val_accuracy: 0.8238 Epoch 24/200 4418/4418 [==============================] - ETA: 0s - loss: 0.7133 - accuracy: 0.8208 Epoch 00024: val_accuracy did not improve from 0.82385 Epoch 00024: val_accuracy did not improve from 0.82385 4418/4418 [==============================] - 96s 22ms/step - loss: 0.7133 - accuracy: 0.8208 - val_loss: 0.4476 - val_accuracy: 0.8203 Epoch 25/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.7122 - accuracy: 0.8221 Epoch 00025: val_accuracy did not improve from 0.82385 Epoch 00025: val_accuracy did not improve from 0.82385 4418/4418 [==============================] - 92s 21ms/step - loss: 0.7122 - accuracy: 0.8221 - val_loss: 0.4301 - val_accuracy: 0.8223 Epoch 26/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.7089 - accuracy: 0.8231 Epoch 00026: val_accuracy improved from 0.82385 to 0.82570, saving model to classifier_weights2-improvement-26-0.83.hdf5 Epoch 00026: val_accuracy did not improve from 0.82570 4418/4418 [==============================] - 91s 21ms/step - loss: 0.7089 - accuracy: 0.8231 - val_loss: 0.4222 - val_accuracy: 0.8257 Epoch 27/200 4418/4418 [==============================] - ETA: 0s - loss: 0.7094 - accuracy: 0.8213 Epoch 00027: val_accuracy did not improve from 0.82570 Epoch 00027: val_accuracy did not improve from 0.82570 4418/4418 [==============================] - 91s 21ms/step - loss: 0.7094 - accuracy: 0.8213 - val_loss: 0.4425 - val_accuracy: 0.8158 Epoch 28/200 4418/4418 [==============================] - ETA: 0s - loss: 0.7062 - accuracy: 0.8222 Epoch 00028: val_accuracy did not improve from 0.82570 Epoch 00028: val_accuracy did not improve from 0.82570 4418/4418 [==============================] - 93s 21ms/step - loss: 0.7062 - accuracy: 0.8222 - val_loss: 0.4274 - val_accuracy: 0.8257 Epoch 29/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.7058 - accuracy: 0.8225 Epoch 00029: val_accuracy did not improve from 0.82570 Epoch 00029: val_accuracy did not improve from 0.82570 4418/4418 [==============================] - 90s 20ms/step - loss: 0.7058 - accuracy: 0.8225 - val_loss: 0.4374 - val_accuracy: 0.8187 Epoch 30/200 4418/4418 [==============================] - ETA: 0s - loss: 0.7044 - accuracy: 0.8224 Epoch 00030: val_accuracy did not improve from 0.82570 Epoch 00030: val_accuracy did not improve from 0.82570 4418/4418 [==============================] - 90s 20ms/step - loss: 0.7044 - accuracy: 0.8224 - val_loss: 0.4268 - val_accuracy: 0.8248 Epoch 31/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.7017 - accuracy: 0.8249 Epoch 00031: val_accuracy improved from 0.82570 to 0.82720, saving model to classifier_weights2-improvement-31-0.83.hdf5 Epoch 00031: val_accuracy did not improve from 0.82720 4418/4418 [==============================] - 93s 21ms/step - loss: 0.7016 - accuracy: 0.8249 - val_loss: 0.4211 - val_accuracy: 0.8272 Epoch 32/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6994 - accuracy: 0.8242 Epoch 00032: val_accuracy did not improve from 0.82720 Epoch 00032: val_accuracy did not improve from 0.82720 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6994 - accuracy: 0.8242 - val_loss: 0.4412 - val_accuracy: 0.8167 Epoch 33/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6996 - accuracy: 0.8228 Epoch 00033: val_accuracy did not improve from 0.82720 Epoch 00033: val_accuracy did not improve from 0.82720 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6996 - accuracy: 0.8228 - val_loss: 0.4265 - val_accuracy: 0.8225 Epoch 34/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6986 - accuracy: 0.8244 Epoch 00034: val_accuracy did not improve from 0.82720 Epoch 00034: val_accuracy did not improve from 0.82720 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6986 - accuracy: 0.8244 - val_loss: 0.4371 - val_accuracy: 0.8162 Epoch 35/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6970 - accuracy: 0.8235 Epoch 00035: val_accuracy improved from 0.82720 to 0.83614, saving model to classifier_weights2-improvement-35-0.84.hdf5 Epoch 00035: val_accuracy did not improve from 0.83614 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6970 - accuracy: 0.8235 - val_loss: 0.4109 - val_accuracy: 0.8361 Epoch 36/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6961 - accuracy: 0.8236 Epoch 00036: val_accuracy did not improve from 0.83614 Epoch 00036: val_accuracy did not improve from 0.83614 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6961 - accuracy: 0.8236 - val_loss: 0.4190 - val_accuracy: 0.8298 Epoch 37/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6948 - accuracy: 0.8239 Epoch 00037: val_accuracy did not improve from 0.83614 Epoch 00037: val_accuracy did not improve from 0.83614 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6947 - accuracy: 0.8239 - val_loss: 0.4279 - val_accuracy: 0.8200 Epoch 38/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6929 - accuracy: 0.8248 Epoch 00038: val_accuracy did not improve from 0.83614 Epoch 00038: val_accuracy did not improve from 0.83614 4418/4418 [==============================] - 93s 21ms/step - loss: 0.6929 - accuracy: 0.8248 - val_loss: 0.4087 - val_accuracy: 0.8290 Epoch 39/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6919 - accuracy: 0.8243 Epoch 00039: val_accuracy did not improve from 0.83614 Epoch 00039: val_accuracy did not improve from 0.83614 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6919 - accuracy: 0.8243 - val_loss: 0.4278 - val_accuracy: 0.8223 Epoch 40/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6887 - accuracy: 0.8244 Epoch 00040: val_accuracy did not improve from 0.83614 Epoch 00040: val_accuracy did not improve from 0.83614 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6887 - accuracy: 0.8244 - val_loss: 0.4057 - val_accuracy: 0.8279 Epoch 41/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6887 - accuracy: 0.8248 Epoch 00041: val_accuracy did not improve from 0.83614 Epoch 00041: val_accuracy did not improve from 0.83614 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6887 - accuracy: 0.8248 - val_loss: 0.4429 - val_accuracy: 0.8188 Epoch 42/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6875 - accuracy: 0.8257 Epoch 00042: val_accuracy improved from 0.83614 to 0.83851, saving model to classifier_weights2-improvement-42-0.84.hdf5 Epoch 00042: val_accuracy did not improve from 0.83851 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6875 - accuracy: 0.8257 - val_loss: 0.4032 - val_accuracy: 0.8385 Epoch 43/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6856 - accuracy: 0.8250 Epoch 00043: val_accuracy did not improve from 0.83851 Epoch 00043: val_accuracy did not improve from 0.83851 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6856 - accuracy: 0.8250 - val_loss: 0.4211 - val_accuracy: 0.8292 Epoch 44/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6848 - accuracy: 0.8254 Epoch 00044: val_accuracy did not improve from 0.83851 Epoch 00044: val_accuracy did not improve from 0.83851 4418/4418 [==============================] - 91s 21ms/step - loss: 0.6848 - accuracy: 0.8254 - val_loss: 0.4338 - val_accuracy: 0.8118 Epoch 45/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6830 - accuracy: 0.8260 Epoch 00045: val_accuracy did not improve from 0.83851 Epoch 00045: val_accuracy did not improve from 0.83851 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6830 - accuracy: 0.8260 - val_loss: 0.4360 - val_accuracy: 0.8181 Epoch 46/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6836 - accuracy: 0.8251 Epoch 00046: val_accuracy did not improve from 0.83851 Epoch 00046: val_accuracy did not improve from 0.83851 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6837 - accuracy: 0.8251 - val_loss: 0.4258 - val_accuracy: 0.8206 Epoch 47/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6817 - accuracy: 0.8260 Epoch 00047: val_accuracy did not improve from 0.83851 Epoch 00047: val_accuracy did not improve from 0.83851 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6817 - accuracy: 0.8260 - val_loss: 0.4144 - val_accuracy: 0.8268 Epoch 48/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6802 - accuracy: 0.8273 Epoch 00048: val_accuracy did not improve from 0.83851 Epoch 00048: val_accuracy did not improve from 0.83851 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6802 - accuracy: 0.8273 - val_loss: 0.4184 - val_accuracy: 0.8236 Epoch 49/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6792 - accuracy: 0.8269 Epoch 00049: val_accuracy did not improve from 0.83851 Epoch 00049: val_accuracy did not improve from 0.83851 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6792 - accuracy: 0.8269 - val_loss: 0.4236 - val_accuracy: 0.8174 Epoch 50/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6772 - accuracy: 0.8279 Epoch 00050: val_accuracy did not improve from 0.83851 Epoch 00050: val_accuracy did not improve from 0.83851 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6772 - accuracy: 0.8279 - val_loss: 0.4175 - val_accuracy: 0.8240 Epoch 51/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6770 - accuracy: 0.8283 Epoch 00051: val_accuracy did not improve from 0.83851 Epoch 00051: val_accuracy did not improve from 0.83851 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6770 - accuracy: 0.8283 - val_loss: 0.4426 - val_accuracy: 0.8136 Epoch 52/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6772 - accuracy: 0.8273 Epoch 00052: val_accuracy did not improve from 0.83851 Epoch 00052: val_accuracy did not improve from 0.83851 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6772 - accuracy: 0.8273 - val_loss: 0.4204 - val_accuracy: 0.8353 Epoch 53/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6756 - accuracy: 0.8281 Epoch 00053: val_accuracy did not improve from 0.83851 Epoch 00053: val_accuracy did not improve from 0.83851 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6756 - accuracy: 0.8281 - val_loss: 0.4271 - val_accuracy: 0.8215 Epoch 54/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6749 - accuracy: 0.8286 Epoch 00054: val_accuracy did not improve from 0.83851 Epoch 00054: val_accuracy did not improve from 0.83851 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6749 - accuracy: 0.8286 - val_loss: 0.4306 - val_accuracy: 0.8179 Epoch 55/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6743 - accuracy: 0.8294 Epoch 00055: val_accuracy did not improve from 0.83851 Epoch 00055: val_accuracy did not improve from 0.83851 4418/4418 [==============================] - 87s 20ms/step - loss: 0.6743 - accuracy: 0.8294 - val_loss: 0.4061 - val_accuracy: 0.8287 Epoch 56/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6713 - accuracy: 0.8293 Epoch 00056: val_accuracy did not improve from 0.83851 Epoch 00056: val_accuracy did not improve from 0.83851 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6713 - accuracy: 0.8293 - val_loss: 0.3953 - val_accuracy: 0.8326 Epoch 57/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6714 - accuracy: 0.8291 Epoch 00057: val_accuracy did not improve from 0.83851 Epoch 00057: val_accuracy did not improve from 0.83851 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6714 - accuracy: 0.8291 - val_loss: 0.4076 - val_accuracy: 0.8293 Epoch 58/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6725 - accuracy: 0.8294 Epoch 00058: val_accuracy did not improve from 0.83851 Epoch 00058: val_accuracy did not improve from 0.83851 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6725 - accuracy: 0.8294 - val_loss: 0.4043 - val_accuracy: 0.8316 Epoch 59/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6699 - accuracy: 0.8290 Epoch 00059: val_accuracy did not improve from 0.83851 Epoch 00059: val_accuracy did not improve from 0.83851 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6699 - accuracy: 0.8290 - val_loss: 0.4302 - val_accuracy: 0.8239 Epoch 60/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6706 - accuracy: 0.8290 Epoch 00060: val_accuracy did not improve from 0.83851 Epoch 00060: val_accuracy did not improve from 0.83851 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6706 - accuracy: 0.8290 - val_loss: 0.4042 - val_accuracy: 0.8289 Epoch 61/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6694 - accuracy: 0.8299 Epoch 00061: val_accuracy did not improve from 0.83851 Epoch 00061: val_accuracy did not improve from 0.83851 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6694 - accuracy: 0.8299 - val_loss: 0.4129 - val_accuracy: 0.8218 Epoch 62/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6672 - accuracy: 0.8291 Epoch 00062: val_accuracy did not improve from 0.83851 Epoch 00062: val_accuracy did not improve from 0.83851 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6672 - accuracy: 0.8291 - val_loss: 0.4064 - val_accuracy: 0.8276 Epoch 63/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6678 - accuracy: 0.8291 Epoch 00063: val_accuracy did not improve from 0.83851 Epoch 00063: val_accuracy did not improve from 0.83851 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6678 - accuracy: 0.8291 - val_loss: 0.4141 - val_accuracy: 0.8276 Epoch 64/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6662 - accuracy: 0.8302 Epoch 00064: val_accuracy did not improve from 0.83851 Epoch 00064: val_accuracy did not improve from 0.83851 4418/4418 [==============================] - 91s 21ms/step - loss: 0.6662 - accuracy: 0.8302 - val_loss: 0.4018 - val_accuracy: 0.8343 Epoch 65/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6658 - accuracy: 0.8296 Epoch 00065: val_accuracy did not improve from 0.83851 Epoch 00065: val_accuracy did not improve from 0.83851 4418/4418 [==============================] - 91s 21ms/step - loss: 0.6658 - accuracy: 0.8296 - val_loss: 0.4059 - val_accuracy: 0.8296 Epoch 66/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6660 - accuracy: 0.8308 Epoch 00066: val_accuracy did not improve from 0.83851 Epoch 00066: val_accuracy did not improve from 0.83851 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6660 - accuracy: 0.8308 - val_loss: 0.4154 - val_accuracy: 0.8300 Epoch 67/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6646 - accuracy: 0.8303 Epoch 00067: val_accuracy did not improve from 0.83851 Epoch 00067: val_accuracy did not improve from 0.83851 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6647 - accuracy: 0.8303 - val_loss: 0.4123 - val_accuracy: 0.8306 Epoch 68/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6642 - accuracy: 0.8297 Epoch 00068: val_accuracy did not improve from 0.83851 Epoch 00068: val_accuracy did not improve from 0.83851 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6642 - accuracy: 0.8297 - val_loss: 0.4243 - val_accuracy: 0.8229 Epoch 69/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6637 - accuracy: 0.8293 Epoch 00069: val_accuracy improved from 0.83851 to 0.83934, saving model to classifier_weights2-improvement-69-0.84.hdf5 Epoch 00069: val_accuracy did not improve from 0.83934 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6637 - accuracy: 0.8293 - val_loss: 0.4002 - val_accuracy: 0.8393 Epoch 70/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6635 - accuracy: 0.8302 Epoch 00070: val_accuracy did not improve from 0.83934 Epoch 00070: val_accuracy did not improve from 0.83934 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6635 - accuracy: 0.8302 - val_loss: 0.4440 - val_accuracy: 0.8170 Epoch 71/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6638 - accuracy: 0.8305 Epoch 00071: val_accuracy did not improve from 0.83934 Epoch 00071: val_accuracy did not improve from 0.83934 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6638 - accuracy: 0.8305 - val_loss: 0.4385 - val_accuracy: 0.8223 Epoch 72/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6625 - accuracy: 0.8308 Epoch 00072: val_accuracy did not improve from 0.83934 Epoch 00072: val_accuracy did not improve from 0.83934 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6625 - accuracy: 0.8308 - val_loss: 0.4020 - val_accuracy: 0.8372 Epoch 73/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6610 - accuracy: 0.8300 Epoch 00073: val_accuracy did not improve from 0.83934 Epoch 00073: val_accuracy did not improve from 0.83934 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6610 - accuracy: 0.8300 - val_loss: 0.4192 - val_accuracy: 0.8242 Epoch 74/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6620 - accuracy: 0.8290 Epoch 00074: val_accuracy did not improve from 0.83934 Epoch 00074: val_accuracy did not improve from 0.83934 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6620 - accuracy: 0.8290 - val_loss: 0.4140 - val_accuracy: 0.8226 Epoch 75/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6613 - accuracy: 0.8281 Epoch 00075: val_accuracy did not improve from 0.83934 Epoch 00075: val_accuracy did not improve from 0.83934 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6613 - accuracy: 0.8281 - val_loss: 0.4012 - val_accuracy: 0.8354 Epoch 76/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6596 - accuracy: 0.8306 Epoch 00076: val_accuracy did not improve from 0.83934 Epoch 00076: val_accuracy did not improve from 0.83934 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6596 - accuracy: 0.8306 - val_loss: 0.4068 - val_accuracy: 0.8265 Epoch 77/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6610 - accuracy: 0.8292 Epoch 00077: val_accuracy did not improve from 0.83934 Epoch 00077: val_accuracy did not improve from 0.83934 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6610 - accuracy: 0.8292 - val_loss: 0.4488 - val_accuracy: 0.8171 Epoch 78/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6591 - accuracy: 0.8318 Epoch 00078: val_accuracy did not improve from 0.83934 Epoch 00078: val_accuracy did not improve from 0.83934 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6591 - accuracy: 0.8318 - val_loss: 0.4243 - val_accuracy: 0.8215 Epoch 79/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6591 - accuracy: 0.8297 Epoch 00079: val_accuracy did not improve from 0.83934 Epoch 00079: val_accuracy did not improve from 0.83934 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6591 - accuracy: 0.8297 - val_loss: 0.4175 - val_accuracy: 0.8261 Epoch 80/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6587 - accuracy: 0.8294 Epoch 00080: val_accuracy did not improve from 0.83934 Epoch 00080: val_accuracy did not improve from 0.83934 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6587 - accuracy: 0.8294 - val_loss: 0.4109 - val_accuracy: 0.8247 Epoch 81/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6593 - accuracy: 0.8292 Epoch 00081: val_accuracy did not improve from 0.83934 Epoch 00081: val_accuracy did not improve from 0.83934 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6593 - accuracy: 0.8292 - val_loss: 0.4011 - val_accuracy: 0.8279 Epoch 82/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6570 - accuracy: 0.8305 Epoch 00082: val_accuracy did not improve from 0.83934 Epoch 00082: val_accuracy did not improve from 0.83934 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6570 - accuracy: 0.8305 - val_loss: 0.4349 - val_accuracy: 0.8269 Epoch 83/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6573 - accuracy: 0.8315 Epoch 00083: val_accuracy did not improve from 0.83934 Epoch 00083: val_accuracy did not improve from 0.83934 4418/4418 [==============================] - 87s 20ms/step - loss: 0.6573 - accuracy: 0.8315 - val_loss: 0.3926 - val_accuracy: 0.8336 Epoch 84/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6560 - accuracy: 0.8300 Epoch 00084: val_accuracy did not improve from 0.83934 Epoch 00084: val_accuracy did not improve from 0.83934 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6560 - accuracy: 0.8300 - val_loss: 0.4181 - val_accuracy: 0.8258 Epoch 85/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6544 - accuracy: 0.8305 Epoch 00085: val_accuracy did not improve from 0.83934 Epoch 00085: val_accuracy did not improve from 0.83934 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6544 - accuracy: 0.8305 - val_loss: 0.4146 - val_accuracy: 0.8294 Epoch 86/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6550 - accuracy: 0.8314 Epoch 00086: val_accuracy did not improve from 0.83934 Epoch 00086: val_accuracy did not improve from 0.83934 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6549 - accuracy: 0.8314 - val_loss: 0.4198 - val_accuracy: 0.8211 Epoch 87/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6539 - accuracy: 0.8322 Epoch 00087: val_accuracy did not improve from 0.83934 Epoch 00087: val_accuracy did not improve from 0.83934 4418/4418 [==============================] - 87s 20ms/step - loss: 0.6539 - accuracy: 0.8322 - val_loss: 0.4175 - val_accuracy: 0.8256 Epoch 88/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6550 - accuracy: 0.8305 Epoch 00088: val_accuracy did not improve from 0.83934 Epoch 00088: val_accuracy did not improve from 0.83934 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6550 - accuracy: 0.8305 - val_loss: 0.4165 - val_accuracy: 0.8260 Epoch 89/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6543 - accuracy: 0.8310 Epoch 00089: val_accuracy did not improve from 0.83934 Epoch 00089: val_accuracy did not improve from 0.83934 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6543 - accuracy: 0.8310 - val_loss: 0.4248 - val_accuracy: 0.8249 Epoch 90/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6544 - accuracy: 0.8316 Epoch 00090: val_accuracy did not improve from 0.83934 Epoch 00090: val_accuracy did not improve from 0.83934 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6544 - accuracy: 0.8316 - val_loss: 0.4047 - val_accuracy: 0.8280 Epoch 91/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6525 - accuracy: 0.8319 Epoch 00091: val_accuracy did not improve from 0.83934 Epoch 00091: val_accuracy did not improve from 0.83934 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6525 - accuracy: 0.8319 - val_loss: 0.4221 - val_accuracy: 0.8211 Epoch 92/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6555 - accuracy: 0.8318 Epoch 00092: val_accuracy did not improve from 0.83934 Epoch 00092: val_accuracy did not improve from 0.83934 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6554 - accuracy: 0.8318 - val_loss: 0.4002 - val_accuracy: 0.8327 Epoch 93/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6519 - accuracy: 0.8318 Epoch 00093: val_accuracy did not improve from 0.83934 Epoch 00093: val_accuracy did not improve from 0.83934 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6519 - accuracy: 0.8318 - val_loss: 0.4018 - val_accuracy: 0.8275 Epoch 94/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6513 - accuracy: 0.8318 Epoch 00094: val_accuracy improved from 0.83934 to 0.84072, saving model to classifier_weights2-improvement-94-0.84.hdf5 Epoch 00094: val_accuracy did not improve from 0.84072 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6513 - accuracy: 0.8318 - val_loss: 0.3906 - val_accuracy: 0.8407 Epoch 95/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6518 - accuracy: 0.8316 Epoch 00095: val_accuracy did not improve from 0.84072 Epoch 00095: val_accuracy did not improve from 0.84072 4418/4418 [==============================] - 87s 20ms/step - loss: 0.6518 - accuracy: 0.8316 - val_loss: 0.4107 - val_accuracy: 0.8258 Epoch 96/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6504 - accuracy: 0.8312 Epoch 00096: val_accuracy did not improve from 0.84072 Epoch 00096: val_accuracy did not improve from 0.84072 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6504 - accuracy: 0.8312 - val_loss: 0.3982 - val_accuracy: 0.8349 Epoch 97/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6504 - accuracy: 0.8318 Epoch 00097: val_accuracy did not improve from 0.84072 Epoch 00097: val_accuracy did not improve from 0.84072 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6504 - accuracy: 0.8318 - val_loss: 0.4091 - val_accuracy: 0.8281 Epoch 98/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6489 - accuracy: 0.8310 Epoch 00098: val_accuracy did not improve from 0.84072 Epoch 00098: val_accuracy did not improve from 0.84072 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6489 - accuracy: 0.8310 - val_loss: 0.3783 - val_accuracy: 0.8389 Epoch 99/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6486 - accuracy: 0.8321 Epoch 00099: val_accuracy did not improve from 0.84072 Epoch 00099: val_accuracy did not improve from 0.84072 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6486 - accuracy: 0.8321 - val_loss: 0.3980 - val_accuracy: 0.8279 Epoch 100/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6488 - accuracy: 0.8319 Epoch 00100: val_accuracy did not improve from 0.84072 Epoch 00100: val_accuracy did not improve from 0.84072 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6488 - accuracy: 0.8319 - val_loss: 0.4063 - val_accuracy: 0.8300 Epoch 101/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6475 - accuracy: 0.8314 Epoch 00101: val_accuracy did not improve from 0.84072 Epoch 00101: val_accuracy did not improve from 0.84072 4418/4418 [==============================] - 87s 20ms/step - loss: 0.6475 - accuracy: 0.8314 - val_loss: 0.3950 - val_accuracy: 0.8352 Epoch 102/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6484 - accuracy: 0.8315 Epoch 00102: val_accuracy did not improve from 0.84072 Epoch 00102: val_accuracy did not improve from 0.84072 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6484 - accuracy: 0.8315 - val_loss: 0.4108 - val_accuracy: 0.8294 Epoch 103/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6491 - accuracy: 0.8333 Epoch 00103: val_accuracy did not improve from 0.84072 Epoch 00103: val_accuracy did not improve from 0.84072 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6491 - accuracy: 0.8333 - val_loss: 0.4283 - val_accuracy: 0.8248 Epoch 104/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6466 - accuracy: 0.8328 Epoch 00104: val_accuracy did not improve from 0.84072 Epoch 00104: val_accuracy did not improve from 0.84072 4418/4418 [==============================] - 87s 20ms/step - loss: 0.6466 - accuracy: 0.8328 - val_loss: 0.4319 - val_accuracy: 0.8187 Epoch 105/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6484 - accuracy: 0.8323 Epoch 00105: val_accuracy did not improve from 0.84072 Epoch 00105: val_accuracy did not improve from 0.84072 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6484 - accuracy: 0.8323 - val_loss: 0.4338 - val_accuracy: 0.8339 Epoch 106/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6457 - accuracy: 0.8320 Epoch 00106: val_accuracy did not improve from 0.84072 Epoch 00106: val_accuracy did not improve from 0.84072 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6457 - accuracy: 0.8320 - val_loss: 0.4070 - val_accuracy: 0.8315 Epoch 107/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6466 - accuracy: 0.8340 Epoch 00107: val_accuracy did not improve from 0.84072 Epoch 00107: val_accuracy did not improve from 0.84072 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6466 - accuracy: 0.8340 - val_loss: 0.4105 - val_accuracy: 0.8319 Epoch 108/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6456 - accuracy: 0.8331 Epoch 00108: val_accuracy did not improve from 0.84072 Epoch 00108: val_accuracy did not improve from 0.84072 4418/4418 [==============================] - 87s 20ms/step - loss: 0.6456 - accuracy: 0.8331 - val_loss: 0.4021 - val_accuracy: 0.8291 Epoch 109/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6448 - accuracy: 0.8316 Epoch 00109: val_accuracy did not improve from 0.84072 Epoch 00109: val_accuracy did not improve from 0.84072 4418/4418 [==============================] - 87s 20ms/step - loss: 0.6448 - accuracy: 0.8316 - val_loss: 0.3888 - val_accuracy: 0.8335 Epoch 110/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6442 - accuracy: 0.8319 Epoch 00110: val_accuracy did not improve from 0.84072 Epoch 00110: val_accuracy did not improve from 0.84072 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6442 - accuracy: 0.8319 - val_loss: 0.3990 - val_accuracy: 0.8302 Epoch 111/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6449 - accuracy: 0.8315 Epoch 00111: val_accuracy did not improve from 0.84072 Epoch 00111: val_accuracy did not improve from 0.84072 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6449 - accuracy: 0.8315 - val_loss: 0.4081 - val_accuracy: 0.8290 Epoch 112/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6453 - accuracy: 0.8319 Epoch 00112: val_accuracy did not improve from 0.84072 Epoch 00112: val_accuracy did not improve from 0.84072 4418/4418 [==============================] - 87s 20ms/step - loss: 0.6453 - accuracy: 0.8319 - val_loss: 0.4034 - val_accuracy: 0.8348 Epoch 113/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6436 - accuracy: 0.8326 Epoch 00113: val_accuracy improved from 0.84072 to 0.84206, saving model to classifier_weights2-improvement-113-0.84.hdf5 Epoch 00113: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6436 - accuracy: 0.8326 - val_loss: 0.4085 - val_accuracy: 0.8421 Epoch 114/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6431 - accuracy: 0.8335 Epoch 00114: val_accuracy did not improve from 0.84206 Epoch 00114: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6430 - accuracy: 0.8334 - val_loss: 0.4307 - val_accuracy: 0.8278 Epoch 115/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6423 - accuracy: 0.8329 Epoch 00115: val_accuracy did not improve from 0.84206 Epoch 00115: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6423 - accuracy: 0.8329 - val_loss: 0.4052 - val_accuracy: 0.8355 Epoch 116/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6426 - accuracy: 0.8318 Epoch 00116: val_accuracy did not improve from 0.84206 Epoch 00116: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6426 - accuracy: 0.8318 - val_loss: 0.4126 - val_accuracy: 0.8314 Epoch 117/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6427 - accuracy: 0.8331 Epoch 00117: val_accuracy did not improve from 0.84206 Epoch 00117: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6428 - accuracy: 0.8331 - val_loss: 0.4099 - val_accuracy: 0.8301 Epoch 118/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6424 - accuracy: 0.8329 Epoch 00118: val_accuracy did not improve from 0.84206 Epoch 00118: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6423 - accuracy: 0.8329 - val_loss: 0.4081 - val_accuracy: 0.8299 Epoch 119/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6425 - accuracy: 0.8333 Epoch 00119: val_accuracy did not improve from 0.84206 Epoch 00119: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6425 - accuracy: 0.8333 - val_loss: 0.4091 - val_accuracy: 0.8282 Epoch 120/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6410 - accuracy: 0.8337 Epoch 00120: val_accuracy did not improve from 0.84206 Epoch 00120: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 91s 21ms/step - loss: 0.6411 - accuracy: 0.8337 - val_loss: 0.4345 - val_accuracy: 0.8231 Epoch 121/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6425 - accuracy: 0.8333 Epoch 00121: val_accuracy did not improve from 0.84206 Epoch 00121: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6425 - accuracy: 0.8333 - val_loss: 0.3969 - val_accuracy: 0.8344 Epoch 122/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6414 - accuracy: 0.8329 Epoch 00122: val_accuracy did not improve from 0.84206 Epoch 00122: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6414 - accuracy: 0.8329 - val_loss: 0.4022 - val_accuracy: 0.8352 Epoch 123/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6410 - accuracy: 0.8329 Epoch 00123: val_accuracy did not improve from 0.84206 Epoch 00123: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6410 - accuracy: 0.8329 - val_loss: 0.3956 - val_accuracy: 0.8355 Epoch 124/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6401 - accuracy: 0.8337 Epoch 00124: val_accuracy did not improve from 0.84206 Epoch 00124: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6401 - accuracy: 0.8337 - val_loss: 0.4229 - val_accuracy: 0.8249 Epoch 125/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6393 - accuracy: 0.8337 Epoch 00125: val_accuracy did not improve from 0.84206 Epoch 00125: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6393 - accuracy: 0.8337 - val_loss: 0.4066 - val_accuracy: 0.8295 Epoch 126/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6385 - accuracy: 0.8334 Epoch 00126: val_accuracy did not improve from 0.84206 Epoch 00126: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6385 - accuracy: 0.8334 - val_loss: 0.4069 - val_accuracy: 0.8319 Epoch 127/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6385 - accuracy: 0.8341 Epoch 00127: val_accuracy did not improve from 0.84206 Epoch 00127: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6385 - accuracy: 0.8341 - val_loss: 0.4034 - val_accuracy: 0.8292 Epoch 128/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6420 - accuracy: 0.8326 Epoch 00128: val_accuracy did not improve from 0.84206 Epoch 00128: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6421 - accuracy: 0.8326 - val_loss: 0.4038 - val_accuracy: 0.8346 Epoch 129/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6392 - accuracy: 0.8335 Epoch 00129: val_accuracy did not improve from 0.84206 Epoch 00129: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6392 - accuracy: 0.8335 - val_loss: 0.4001 - val_accuracy: 0.8337 Epoch 130/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6384 - accuracy: 0.8343 Epoch 00130: val_accuracy did not improve from 0.84206 Epoch 00130: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6384 - accuracy: 0.8343 - val_loss: 0.3939 - val_accuracy: 0.8380 Epoch 131/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6379 - accuracy: 0.8329 Epoch 00131: val_accuracy did not improve from 0.84206 Epoch 00131: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6379 - accuracy: 0.8329 - val_loss: 0.4156 - val_accuracy: 0.8278 Epoch 132/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6391 - accuracy: 0.8318 Epoch 00132: val_accuracy did not improve from 0.84206 Epoch 00132: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6391 - accuracy: 0.8318 - val_loss: 0.4120 - val_accuracy: 0.8245 Epoch 133/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6396 - accuracy: 0.8327 Epoch 00133: val_accuracy did not improve from 0.84206 Epoch 00133: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6396 - accuracy: 0.8327 - val_loss: 0.4121 - val_accuracy: 0.8246 Epoch 134/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6383 - accuracy: 0.8336 Epoch 00134: val_accuracy did not improve from 0.84206 Epoch 00134: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6382 - accuracy: 0.8336 - val_loss: 0.4151 - val_accuracy: 0.8307 Epoch 135/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6374 - accuracy: 0.8336 Epoch 00135: val_accuracy did not improve from 0.84206 Epoch 00135: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6374 - accuracy: 0.8336 - val_loss: 0.3973 - val_accuracy: 0.8337 Epoch 136/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6366 - accuracy: 0.8334 Epoch 00136: val_accuracy did not improve from 0.84206 Epoch 00136: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6366 - accuracy: 0.8334 - val_loss: 0.3977 - val_accuracy: 0.8313 Epoch 137/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6358 - accuracy: 0.8332 Epoch 00137: val_accuracy did not improve from 0.84206 Epoch 00137: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6359 - accuracy: 0.8332 - val_loss: 0.4163 - val_accuracy: 0.8269 Epoch 138/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6346 - accuracy: 0.8341 Epoch 00138: val_accuracy did not improve from 0.84206 Epoch 00138: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6346 - accuracy: 0.8341 - val_loss: 0.4398 - val_accuracy: 0.8233 Epoch 139/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6362 - accuracy: 0.8343 Epoch 00139: val_accuracy did not improve from 0.84206 Epoch 00139: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6362 - accuracy: 0.8343 - val_loss: 0.3991 - val_accuracy: 0.8330 Epoch 140/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6352 - accuracy: 0.8342 Epoch 00140: val_accuracy did not improve from 0.84206 Epoch 00140: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6352 - accuracy: 0.8342 - val_loss: 0.4091 - val_accuracy: 0.8315 Epoch 141/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6365 - accuracy: 0.8335 Epoch 00141: val_accuracy did not improve from 0.84206 Epoch 00141: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 91s 21ms/step - loss: 0.6365 - accuracy: 0.8335 - val_loss: 0.4268 - val_accuracy: 0.8195 Epoch 142/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6349 - accuracy: 0.8330 Epoch 00142: val_accuracy did not improve from 0.84206 Epoch 00142: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6349 - accuracy: 0.8330 - val_loss: 0.4061 - val_accuracy: 0.8336 Epoch 143/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6358 - accuracy: 0.8338 Epoch 00143: val_accuracy did not improve from 0.84206 Epoch 00143: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6358 - accuracy: 0.8338 - val_loss: 0.4094 - val_accuracy: 0.8246 Epoch 144/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6343 - accuracy: 0.8324 Epoch 00144: val_accuracy did not improve from 0.84206 Epoch 00144: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6343 - accuracy: 0.8324 - val_loss: 0.3982 - val_accuracy: 0.8297 Epoch 145/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6342 - accuracy: 0.8333 Epoch 00145: val_accuracy did not improve from 0.84206 Epoch 00145: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6342 - accuracy: 0.8333 - val_loss: 0.4118 - val_accuracy: 0.8356 Epoch 146/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6322 - accuracy: 0.8342 Epoch 00146: val_accuracy did not improve from 0.84206 Epoch 00146: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6322 - accuracy: 0.8342 - val_loss: 0.3949 - val_accuracy: 0.8355 Epoch 147/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6339 - accuracy: 0.8337 Epoch 00147: val_accuracy did not improve from 0.84206 Epoch 00147: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6339 - accuracy: 0.8337 - val_loss: 0.4012 - val_accuracy: 0.8316 Epoch 148/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6332 - accuracy: 0.8339 Epoch 00148: val_accuracy did not improve from 0.84206 Epoch 00148: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6332 - accuracy: 0.8339 - val_loss: 0.4049 - val_accuracy: 0.8331 Epoch 149/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6333 - accuracy: 0.8340 Epoch 00149: val_accuracy did not improve from 0.84206 Epoch 00149: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6333 - accuracy: 0.8340 - val_loss: 0.4300 - val_accuracy: 0.8176 Epoch 150/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6338 - accuracy: 0.8336 Epoch 00150: val_accuracy did not improve from 0.84206 Epoch 00150: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6338 - accuracy: 0.8336 - val_loss: 0.4131 - val_accuracy: 0.8282 Epoch 151/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6326 - accuracy: 0.8330 Epoch 00151: val_accuracy did not improve from 0.84206 Epoch 00151: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6326 - accuracy: 0.8330 - val_loss: 0.3834 - val_accuracy: 0.8375 Epoch 152/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6324 - accuracy: 0.8348 Epoch 00152: val_accuracy did not improve from 0.84206 Epoch 00152: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6324 - accuracy: 0.8349 - val_loss: 0.3945 - val_accuracy: 0.8353 Epoch 153/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6323 - accuracy: 0.8337 Epoch 00153: val_accuracy did not improve from 0.84206 Epoch 00153: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6323 - accuracy: 0.8337 - val_loss: 0.3882 - val_accuracy: 0.8377 Epoch 154/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6346 - accuracy: 0.8335 Epoch 00154: val_accuracy did not improve from 0.84206 Epoch 00154: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6345 - accuracy: 0.8336 - val_loss: 0.3931 - val_accuracy: 0.8340 Epoch 155/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6332 - accuracy: 0.8342 Epoch 00155: val_accuracy did not improve from 0.84206 Epoch 00155: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6333 - accuracy: 0.8342 - val_loss: 0.4102 - val_accuracy: 0.8275 Epoch 156/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6318 - accuracy: 0.8342 Epoch 00156: val_accuracy did not improve from 0.84206 Epoch 00156: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6318 - accuracy: 0.8342 - val_loss: 0.4147 - val_accuracy: 0.8304 Epoch 157/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6317 - accuracy: 0.8338 Epoch 00157: val_accuracy did not improve from 0.84206 Epoch 00157: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6317 - accuracy: 0.8338 - val_loss: 0.4074 - val_accuracy: 0.8311 Epoch 158/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6324 - accuracy: 0.8347 Epoch 00158: val_accuracy did not improve from 0.84206 Epoch 00158: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6324 - accuracy: 0.8347 - val_loss: 0.4078 - val_accuracy: 0.8322 Epoch 159/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6295 - accuracy: 0.8345 Epoch 00159: val_accuracy did not improve from 0.84206 Epoch 00159: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6295 - accuracy: 0.8345 - val_loss: 0.3967 - val_accuracy: 0.8353 Epoch 160/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6310 - accuracy: 0.8341 Epoch 00160: val_accuracy did not improve from 0.84206 Epoch 00160: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6310 - accuracy: 0.8341 - val_loss: 0.4299 - val_accuracy: 0.8265 Epoch 161/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6295 - accuracy: 0.8342 Epoch 00161: val_accuracy did not improve from 0.84206 Epoch 00161: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 92s 21ms/step - loss: 0.6295 - accuracy: 0.8342 - val_loss: 0.3974 - val_accuracy: 0.8326 Epoch 162/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6293 - accuracy: 0.8352 Epoch 00162: val_accuracy did not improve from 0.84206 Epoch 00162: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6293 - accuracy: 0.8352 - val_loss: 0.4082 - val_accuracy: 0.8274 Epoch 163/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6289 - accuracy: 0.8352 Epoch 00163: val_accuracy did not improve from 0.84206 Epoch 00163: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6289 - accuracy: 0.8352 - val_loss: 0.4052 - val_accuracy: 0.8339 Epoch 164/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6285 - accuracy: 0.8348 Epoch 00164: val_accuracy did not improve from 0.84206 Epoch 00164: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6285 - accuracy: 0.8348 - val_loss: 0.4084 - val_accuracy: 0.8325 Epoch 165/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6283 - accuracy: 0.8343 Epoch 00165: val_accuracy did not improve from 0.84206 Epoch 00165: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6283 - accuracy: 0.8343 - val_loss: 0.4223 - val_accuracy: 0.8211 Epoch 166/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6288 - accuracy: 0.8343 Epoch 00166: val_accuracy did not improve from 0.84206 Epoch 00166: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6288 - accuracy: 0.8343 - val_loss: 0.4051 - val_accuracy: 0.8294 Epoch 167/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6294 - accuracy: 0.8346 Epoch 00167: val_accuracy did not improve from 0.84206 Epoch 00167: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6294 - accuracy: 0.8346 - val_loss: 0.3929 - val_accuracy: 0.8386 Epoch 168/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6292 - accuracy: 0.8348 Epoch 00168: val_accuracy did not improve from 0.84206 Epoch 00168: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 92s 21ms/step - loss: 0.6292 - accuracy: 0.8348 - val_loss: 0.4150 - val_accuracy: 0.8284 Epoch 169/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6285 - accuracy: 0.8352 Epoch 00169: val_accuracy did not improve from 0.84206 Epoch 00169: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6285 - accuracy: 0.8352 - val_loss: 0.4115 - val_accuracy: 0.8324 Epoch 170/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6281 - accuracy: 0.8352 Epoch 00170: val_accuracy did not improve from 0.84206 Epoch 00170: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6281 - accuracy: 0.8352 - val_loss: 0.4089 - val_accuracy: 0.8342 Epoch 171/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6285 - accuracy: 0.8349 Epoch 00171: val_accuracy did not improve from 0.84206 Epoch 00171: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6285 - accuracy: 0.8349 - val_loss: 0.4131 - val_accuracy: 0.8276 Epoch 172/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6277 - accuracy: 0.8347 Epoch 00172: val_accuracy did not improve from 0.84206 Epoch 00172: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 91s 21ms/step - loss: 0.6277 - accuracy: 0.8347 - val_loss: 0.4234 - val_accuracy: 0.8279 Epoch 173/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6262 - accuracy: 0.8351 Epoch 00173: val_accuracy did not improve from 0.84206 Epoch 00173: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6262 - accuracy: 0.8351 - val_loss: 0.4133 - val_accuracy: 0.8338 Epoch 174/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6286 - accuracy: 0.8348 Epoch 00174: val_accuracy did not improve from 0.84206 Epoch 00174: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6286 - accuracy: 0.8348 - val_loss: 0.4096 - val_accuracy: 0.8311 Epoch 175/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6266 - accuracy: 0.8355 Epoch 00175: val_accuracy did not improve from 0.84206 Epoch 00175: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 92s 21ms/step - loss: 0.6267 - accuracy: 0.8355 - val_loss: 0.4054 - val_accuracy: 0.8329 Epoch 176/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6280 - accuracy: 0.8363 Epoch 00176: val_accuracy did not improve from 0.84206 Epoch 00176: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6280 - accuracy: 0.8363 - val_loss: 0.3961 - val_accuracy: 0.8377 Epoch 177/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6273 - accuracy: 0.8351 Epoch 00177: val_accuracy did not improve from 0.84206 Epoch 00177: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6273 - accuracy: 0.8351 - val_loss: 0.4056 - val_accuracy: 0.8316 Epoch 178/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6245 - accuracy: 0.8363 Epoch 00178: val_accuracy did not improve from 0.84206 Epoch 00178: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6244 - accuracy: 0.8363 - val_loss: 0.4252 - val_accuracy: 0.8256 Epoch 179/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6264 - accuracy: 0.8347 Epoch 00179: val_accuracy did not improve from 0.84206 Epoch 00179: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6264 - accuracy: 0.8347 - val_loss: 0.3888 - val_accuracy: 0.8366 Epoch 180/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6264 - accuracy: 0.8359 Epoch 00180: val_accuracy did not improve from 0.84206 Epoch 00180: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 87s 20ms/step - loss: 0.6263 - accuracy: 0.8359 - val_loss: 0.4097 - val_accuracy: 0.8328 Epoch 181/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6250 - accuracy: 0.8357 Epoch 00181: val_accuracy did not improve from 0.84206 Epoch 00181: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6250 - accuracy: 0.8357 - val_loss: 0.3998 - val_accuracy: 0.8350 Epoch 182/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6262 - accuracy: 0.8353 Epoch 00182: val_accuracy did not improve from 0.84206 Epoch 00182: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 94s 21ms/step - loss: 0.6262 - accuracy: 0.8353 - val_loss: 0.3992 - val_accuracy: 0.8330 Epoch 183/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6253 - accuracy: 0.8361 Epoch 00183: val_accuracy did not improve from 0.84206 Epoch 00183: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6253 - accuracy: 0.8361 - val_loss: 0.3984 - val_accuracy: 0.8338 Epoch 184/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6249 - accuracy: 0.8344 Epoch 00184: val_accuracy did not improve from 0.84206 Epoch 00184: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6248 - accuracy: 0.8344 - val_loss: 0.3816 - val_accuracy: 0.8398 Epoch 185/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6247 - accuracy: 0.8360 Epoch 00185: val_accuracy did not improve from 0.84206 Epoch 00185: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6247 - accuracy: 0.8360 - val_loss: 0.3983 - val_accuracy: 0.8358 Epoch 186/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6259 - accuracy: 0.8364 Epoch 00186: val_accuracy did not improve from 0.84206 Epoch 00186: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 92s 21ms/step - loss: 0.6259 - accuracy: 0.8364 - val_loss: 0.4101 - val_accuracy: 0.8341 Epoch 187/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6236 - accuracy: 0.8365 Epoch 00187: val_accuracy did not improve from 0.84206 Epoch 00187: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6236 - accuracy: 0.8365 - val_loss: 0.4022 - val_accuracy: 0.8339 Epoch 188/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6238 - accuracy: 0.8356 Epoch 00188: val_accuracy did not improve from 0.84206 Epoch 00188: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6238 - accuracy: 0.8356 - val_loss: 0.4080 - val_accuracy: 0.8297 Epoch 189/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6242 - accuracy: 0.8358 Epoch 00189: val_accuracy did not improve from 0.84206 Epoch 00189: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 91s 21ms/step - loss: 0.6242 - accuracy: 0.8358 - val_loss: 0.4019 - val_accuracy: 0.8321 Epoch 190/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6256 - accuracy: 0.8360 Epoch 00190: val_accuracy did not improve from 0.84206 Epoch 00190: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6256 - accuracy: 0.8360 - val_loss: 0.4053 - val_accuracy: 0.8296 Epoch 191/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6257 - accuracy: 0.8359 Epoch 00191: val_accuracy did not improve from 0.84206 Epoch 00191: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6257 - accuracy: 0.8359 - val_loss: 0.4088 - val_accuracy: 0.8237 Epoch 192/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6245 - accuracy: 0.8359 Epoch 00192: val_accuracy did not improve from 0.84206 Epoch 00192: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6245 - accuracy: 0.8359 - val_loss: 0.4046 - val_accuracy: 0.8297 Epoch 193/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6225 - accuracy: 0.8365 Epoch 00193: val_accuracy did not improve from 0.84206 Epoch 00193: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6225 - accuracy: 0.8365 - val_loss: 0.4106 - val_accuracy: 0.8312 Epoch 194/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6205 - accuracy: 0.8367 Epoch 00194: val_accuracy did not improve from 0.84206 Epoch 00194: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6205 - accuracy: 0.8368 - val_loss: 0.4002 - val_accuracy: 0.8335 Epoch 195/200 4418/4418 [==============================] - ETA: 0s - loss: 0.6225 - accuracy: 0.8363 Epoch 00195: val_accuracy did not improve from 0.84206 Epoch 00195: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 88s 20ms/step - loss: 0.6225 - accuracy: 0.8363 - val_loss: 0.4062 - val_accuracy: 0.8319 Epoch 196/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6223 - accuracy: 0.8366 Epoch 00196: val_accuracy did not improve from 0.84206 Epoch 00196: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 90s 20ms/step - loss: 0.6223 - accuracy: 0.8366 - val_loss: 0.4042 - val_accuracy: 0.8300 Epoch 197/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6217 - accuracy: 0.8372 Epoch 00197: val_accuracy did not improve from 0.84206 Epoch 00197: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6216 - accuracy: 0.8372 - val_loss: 0.4034 - val_accuracy: 0.8300 Epoch 198/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6221 - accuracy: 0.8361 Epoch 00198: val_accuracy did not improve from 0.84206 Epoch 00198: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 87s 20ms/step - loss: 0.6220 - accuracy: 0.8361 - val_loss: 0.4139 - val_accuracy: 0.8303 Epoch 199/200 4416/4418 [============================>.] - ETA: 0s - loss: 0.6209 - accuracy: 0.8369 Epoch 00199: val_accuracy did not improve from 0.84206 Epoch 00199: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 87s 20ms/step - loss: 0.6210 - accuracy: 0.8369 - val_loss: 0.4161 - val_accuracy: 0.8290 Epoch 200/200 4417/4418 [============================>.] - ETA: 0s - loss: 0.6207 - accuracy: 0.8363 Epoch 00200: val_accuracy did not improve from 0.84206 Epoch 00200: val_accuracy did not improve from 0.84206 4418/4418 [==============================] - 89s 20ms/step - loss: 0.6207 - accuracy: 0.8363 - val_loss: 0.4041 - val_accuracy: 0.8359
Next, let's plot the evolution of the metrics over the epochs of training:
# list all data in history
print(history.history.keys())
# summarize history for accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.savefig("accuracy2.png")
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.title('model train loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.savefig("loss2-1.png")
plt.show()
plt.plot(history.history['val_loss'])
plt.title('model validation loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.savefig("loss2-2.png")
plt.show()
dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
Let's load the model with the best performance:
model.load_weights("/content/classifier_weights-improvement-94-0.84.hdf5")
Testing the Model¶
We will examine the accuracy of the model on the train set and the test set:
Let's generate a classification report that will give us insights on the performance of the model on both train set and test set:
from sklearn.metrics import classification_report
y_train_pred=model.predict(muon_hits_x_train_values)
y_train_pred_best=np.argmax(y_train_pred,axis=1)
train_report=classification_report(y_encoded_train,y_train_pred_best)
y_test_pred=model.predict(muon_hits_x_test_values)
y_test_pred_best=np.argmax(y_test_pred,axis=1)
test_report=classification_report(y_encoded_test,y_test_pred_best)
print("Train report \n",train_report)
print("Test report \n",test_report)
Train report precision recall f1-score support 0 0.99 0.90 0.94 2273653 1 0.65 0.59 0.62 447890 2 0.33 0.73 0.46 157029 3 0.36 0.66 0.47 66534 accuracy 0.84 2945106 macro avg 0.58 0.72 0.62 2945106 weighted avg 0.89 0.84 0.86 2945106 Test report precision recall f1-score support 0 0.99 0.90 0.94 252629 1 0.64 0.58 0.61 49766 2 0.32 0.70 0.44 17448 3 0.33 0.61 0.43 7392 accuracy 0.84 327235 macro avg 0.57 0.70 0.61 327235 weighted avg 0.89 0.84 0.85 327235
That's impressive! It seems that adding class weights definitely reduced bias and we reached a weighted f1-score of 0.85 on the test data!
Experimenting with Regression¶
Instead of classifying into four group, let us try to directly predict the q/pT value for each muon momentum:
Splitting the Data¶
Let us first split the dataset into train and test data:
from sklearn.model_selection import train_test_split
muon_hits_x_train,muon_hits_x_test,pt_inverse_labels_train, pt_inverse_labels_test = train_test_split(muon_hits_df_x,pt_inverse_labels,test_size=0.1, random_state=42)
Next, let's create the pipeline for preprocessing the data for regression:
def preprocess_pipeline_reg(df_x,df_y):
#delete columns with missing values more than 70%
df_x=delete_columns(df_x,70)
#impute the other columns with missing values with their corresponding means
df_x=replace_missing_with_mean(df_x)
#standardize the data
from sklearn.preprocessing import StandardScaler
scaler=StandardScaler()
x=scaler.fit_transform(df_x)
return x,df_y.values
muon_hits_x_train_values,pt_inverse_labels_train_values=preprocess_pipeline_reg(muon_hits_x_train, pt_inverse_labels_train)
muon_hits_x_test_values,pt_inverse_labels_test_values=preprocess_pipeline_reg(muon_hits_x_test, pt_inverse_labels_test)
columns deleted: ['Phi angle1', 'Theta angle1', 'Bend angle1', 'Time1', 'Ring1', 'Front/Rear1'] columns deleted: ['Phi angle1', 'Theta angle1', 'Bend angle1', 'Time1', 'Ring1', 'Front/Rear1']
Building the Model¶
Let's first build and compile the model:
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense,Dropout
reg_model=Sequential()
reg_model.add(Dense(512,input_dim=(muon_hits_x_train_values.shape[1]),activation='relu'))
reg_model.add(Dense(256,activation='relu'))
reg_model.add(Dropout(0.1))
reg_model.add(Dense(128,activation='relu'))
reg_model.add(Dense(64,activation='relu'))
reg_model.add(Dropout(0.1))
reg_model.add(Dense(32,activation='relu'))
reg_model.add(Dropout(0.1))
reg_model.add(Dense(1,activation='linear'))
reg_model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),loss='mean_squared_error')
reg_model.summary()
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense (Dense) (None, 512) 17920 _________________________________________________________________ dense_1 (Dense) (None, 256) 131328 _________________________________________________________________ dropout (Dropout) (None, 256) 0 _________________________________________________________________ dense_2 (Dense) (None, 128) 32896 _________________________________________________________________ dense_3 (Dense) (None, 64) 8256 _________________________________________________________________ dropout_1 (Dropout) (None, 64) 0 _________________________________________________________________ dense_4 (Dense) (None, 32) 2080 _________________________________________________________________ dropout_2 (Dropout) (None, 32) 0 _________________________________________________________________ dense_5 (Dense) (None, 1) 33 ================================================================= Total params: 192,513 Trainable params: 192,513 Non-trainable params: 0 _________________________________________________________________
Next, we will set some callbacks for the model including a Checkpoint callback and Early Stopping callback that will both monitor the model's validation mean square error:
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.callbacks import EarlyStopping
filepath="reg_weights2-improvement-{epoch:02d}-{val_loss:.6f}.hdf5"
checkpoint1 = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
checkpoint2= EarlyStopping( monitor='val_loss', patience=3)
callbacks_list = [checkpoint1,checkpoint1]
reg_history = reg_model.fit(muon_hits_x_train_values, pt_inverse_labels_train_values, validation_split=0.25, epochs=200, batch_size=2000,callbacks=callbacks_list)
Epoch 1/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0067 Epoch 00001: val_loss improved from inf to 0.00518, saving model to reg_weights2-improvement-01-0.005183.hdf5 Epoch 00001: val_loss did not improve from 0.00518 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0067 - val_loss: 0.0052 Epoch 2/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0050 Epoch 00002: val_loss improved from 0.00518 to 0.00482, saving model to reg_weights2-improvement-02-0.004817.hdf5 Epoch 00002: val_loss did not improve from 0.00482 1105/1105 [==============================] - 58s 52ms/step - loss: 0.0050 - val_loss: 0.0048 Epoch 3/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0047 Epoch 00003: val_loss improved from 0.00482 to 0.00456, saving model to reg_weights2-improvement-03-0.004556.hdf5 Epoch 00003: val_loss did not improve from 0.00456 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0047 - val_loss: 0.0046 Epoch 4/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0044 Epoch 00004: val_loss improved from 0.00456 to 0.00426, saving model to reg_weights2-improvement-04-0.004255.hdf5 Epoch 00004: val_loss did not improve from 0.00426 1105/1105 [==============================] - 57s 51ms/step - loss: 0.0044 - val_loss: 0.0043 Epoch 5/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0043 Epoch 00005: val_loss improved from 0.00426 to 0.00403, saving model to reg_weights2-improvement-05-0.004026.hdf5 Epoch 00005: val_loss did not improve from 0.00403 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0043 - val_loss: 0.0040 Epoch 6/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0041 Epoch 00006: val_loss improved from 0.00403 to 0.00402, saving model to reg_weights2-improvement-06-0.004021.hdf5 Epoch 00006: val_loss did not improve from 0.00402 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0041 - val_loss: 0.0040 Epoch 7/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0040 Epoch 00007: val_loss did not improve from 0.00402 Epoch 00007: val_loss did not improve from 0.00402 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0040 - val_loss: 0.0041 Epoch 8/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0039 Epoch 00008: val_loss improved from 0.00402 to 0.00387, saving model to reg_weights2-improvement-08-0.003872.hdf5 Epoch 00008: val_loss did not improve from 0.00387 1105/1105 [==============================] - 57s 51ms/step - loss: 0.0039 - val_loss: 0.0039 Epoch 9/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0038 Epoch 00009: val_loss improved from 0.00387 to 0.00366, saving model to reg_weights2-improvement-09-0.003664.hdf5 Epoch 00009: val_loss did not improve from 0.00366 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0038 - val_loss: 0.0037 Epoch 10/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0037 Epoch 00010: val_loss did not improve from 0.00366 Epoch 00010: val_loss did not improve from 0.00366 1105/1105 [==============================] - 57s 51ms/step - loss: 0.0037 - val_loss: 0.0037 Epoch 11/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0036 Epoch 00011: val_loss did not improve from 0.00366 Epoch 00011: val_loss did not improve from 0.00366 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0036 - val_loss: 0.0038 Epoch 12/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0036 Epoch 00012: val_loss improved from 0.00366 to 0.00335, saving model to reg_weights2-improvement-12-0.003351.hdf5 Epoch 00012: val_loss did not improve from 0.00335 1105/1105 [==============================] - 57s 51ms/step - loss: 0.0036 - val_loss: 0.0034 Epoch 13/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0035 Epoch 00013: val_loss did not improve from 0.00335 Epoch 00013: val_loss did not improve from 0.00335 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0035 - val_loss: 0.0039 Epoch 14/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0035 Epoch 00014: val_loss did not improve from 0.00335 Epoch 00014: val_loss did not improve from 0.00335 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0035 - val_loss: 0.0036 Epoch 15/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0034 Epoch 00015: val_loss did not improve from 0.00335 Epoch 00015: val_loss did not improve from 0.00335 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0034 - val_loss: 0.0039 Epoch 16/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0034 Epoch 00016: val_loss did not improve from 0.00335 Epoch 00016: val_loss did not improve from 0.00335 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0034 - val_loss: 0.0035 Epoch 17/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0034 Epoch 00017: val_loss did not improve from 0.00335 Epoch 00017: val_loss did not improve from 0.00335 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0034 - val_loss: 0.0036 Epoch 18/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0033 Epoch 00018: val_loss improved from 0.00335 to 0.00332, saving model to reg_weights2-improvement-18-0.003321.hdf5 Epoch 00018: val_loss did not improve from 0.00332 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0033 - val_loss: 0.0033 Epoch 19/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0033 Epoch 00019: val_loss improved from 0.00332 to 0.00324, saving model to reg_weights2-improvement-19-0.003239.hdf5 Epoch 00019: val_loss did not improve from 0.00324 1105/1105 [==============================] - 57s 51ms/step - loss: 0.0033 - val_loss: 0.0032 Epoch 20/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0033 Epoch 00020: val_loss did not improve from 0.00324 Epoch 00020: val_loss did not improve from 0.00324 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0033 - val_loss: 0.0034 Epoch 21/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0033 Epoch 00021: val_loss did not improve from 0.00324 Epoch 00021: val_loss did not improve from 0.00324 1105/1105 [==============================] - 57s 51ms/step - loss: 0.0033 - val_loss: 0.0035 Epoch 22/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0032 Epoch 00022: val_loss did not improve from 0.00324 Epoch 00022: val_loss did not improve from 0.00324 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0032 - val_loss: 0.0034 Epoch 23/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0032 Epoch 00023: val_loss did not improve from 0.00324 Epoch 00023: val_loss did not improve from 0.00324 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0032 - val_loss: 0.0033 Epoch 24/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0032 Epoch 00024: val_loss did not improve from 0.00324 Epoch 00024: val_loss did not improve from 0.00324 1105/1105 [==============================] - 57s 51ms/step - loss: 0.0032 - val_loss: 0.0034 Epoch 25/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0031 Epoch 00025: val_loss did not improve from 0.00324 Epoch 00025: val_loss did not improve from 0.00324 1105/1105 [==============================] - 57s 51ms/step - loss: 0.0031 - val_loss: 0.0035 Epoch 26/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0031 Epoch 00026: val_loss improved from 0.00324 to 0.00321, saving model to reg_weights2-improvement-26-0.003208.hdf5 Epoch 00026: val_loss did not improve from 0.00321 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0031 - val_loss: 0.0032 Epoch 27/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0031 Epoch 00027: val_loss did not improve from 0.00321 Epoch 00027: val_loss did not improve from 0.00321 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0031 - val_loss: 0.0035 Epoch 28/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0031 Epoch 00028: val_loss did not improve from 0.00321 Epoch 00028: val_loss did not improve from 0.00321 1105/1105 [==============================] - 57s 51ms/step - loss: 0.0031 - val_loss: 0.0032 Epoch 29/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0030 Epoch 00029: val_loss did not improve from 0.00321 Epoch 00029: val_loss did not improve from 0.00321 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0030 - val_loss: 0.0033 Epoch 30/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0030 Epoch 00030: val_loss improved from 0.00321 to 0.00312, saving model to reg_weights2-improvement-30-0.003124.hdf5 Epoch 00030: val_loss did not improve from 0.00312 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0030 - val_loss: 0.0031 Epoch 31/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0030 Epoch 00031: val_loss did not improve from 0.00312 Epoch 00031: val_loss did not improve from 0.00312 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0030 - val_loss: 0.0032 Epoch 32/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0030 Epoch 00032: val_loss did not improve from 0.00312 Epoch 00032: val_loss did not improve from 0.00312 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0030 - val_loss: 0.0031 Epoch 33/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0029 Epoch 00033: val_loss did not improve from 0.00312 Epoch 00033: val_loss did not improve from 0.00312 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0029 - val_loss: 0.0034 Epoch 34/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0029 Epoch 00034: val_loss did not improve from 0.00312 Epoch 00034: val_loss did not improve from 0.00312 1105/1105 [==============================] - 57s 51ms/step - loss: 0.0029 - val_loss: 0.0032 Epoch 35/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0029 Epoch 00035: val_loss improved from 0.00312 to 0.00302, saving model to reg_weights2-improvement-35-0.003019.hdf5 Epoch 00035: val_loss did not improve from 0.00302 1105/1105 [==============================] - 58s 52ms/step - loss: 0.0029 - val_loss: 0.0030 Epoch 36/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0029 Epoch 00036: val_loss improved from 0.00302 to 0.00289, saving model to reg_weights2-improvement-36-0.002890.hdf5 Epoch 00036: val_loss did not improve from 0.00289 1105/1105 [==============================] - 57s 51ms/step - loss: 0.0029 - val_loss: 0.0029 Epoch 37/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0028 Epoch 00037: val_loss did not improve from 0.00289 Epoch 00037: val_loss did not improve from 0.00289 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0028 - val_loss: 0.0030 Epoch 38/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0028 Epoch 00038: val_loss did not improve from 0.00289 Epoch 00038: val_loss did not improve from 0.00289 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0028 - val_loss: 0.0030 Epoch 39/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0028 Epoch 00039: val_loss did not improve from 0.00289 Epoch 00039: val_loss did not improve from 0.00289 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0028 - val_loss: 0.0030 Epoch 40/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0028 Epoch 00040: val_loss did not improve from 0.00289 Epoch 00040: val_loss did not improve from 0.00289 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0028 - val_loss: 0.0030 Epoch 41/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0028 Epoch 00041: val_loss did not improve from 0.00289 Epoch 00041: val_loss did not improve from 0.00289 1105/1105 [==============================] - 58s 52ms/step - loss: 0.0028 - val_loss: 0.0033 Epoch 42/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0027 Epoch 00042: val_loss did not improve from 0.00289 Epoch 00042: val_loss did not improve from 0.00289 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0027 - val_loss: 0.0031 Epoch 43/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0027 Epoch 00043: val_loss improved from 0.00289 to 0.00279, saving model to reg_weights2-improvement-43-0.002789.hdf5 Epoch 00043: val_loss did not improve from 0.00279 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0027 - val_loss: 0.0028 Epoch 44/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0027 Epoch 00044: val_loss did not improve from 0.00279 Epoch 00044: val_loss did not improve from 0.00279 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0027 - val_loss: 0.0031 Epoch 45/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0027 Epoch 00045: val_loss did not improve from 0.00279 Epoch 00045: val_loss did not improve from 0.00279 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0027 - val_loss: 0.0030 Epoch 46/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0027 Epoch 00046: val_loss did not improve from 0.00279 Epoch 00046: val_loss did not improve from 0.00279 1105/1105 [==============================] - 58s 53ms/step - loss: 0.0027 - val_loss: 0.0030 Epoch 47/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0026 Epoch 00047: val_loss did not improve from 0.00279 Epoch 00047: val_loss did not improve from 0.00279 1105/1105 [==============================] - 58s 52ms/step - loss: 0.0026 - val_loss: 0.0031 Epoch 48/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0026 Epoch 00048: val_loss did not improve from 0.00279 Epoch 00048: val_loss did not improve from 0.00279 1105/1105 [==============================] - 58s 52ms/step - loss: 0.0026 - val_loss: 0.0032 Epoch 49/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0026 Epoch 00049: val_loss did not improve from 0.00279 Epoch 00049: val_loss did not improve from 0.00279 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0026 - val_loss: 0.0031 Epoch 50/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0026 Epoch 00050: val_loss did not improve from 0.00279 Epoch 00050: val_loss did not improve from 0.00279 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0026 - val_loss: 0.0030 Epoch 51/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0026 Epoch 00051: val_loss did not improve from 0.00279 Epoch 00051: val_loss did not improve from 0.00279 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0026 - val_loss: 0.0030 Epoch 52/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0025 Epoch 00052: val_loss did not improve from 0.00279 Epoch 00052: val_loss did not improve from 0.00279 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0025 - val_loss: 0.0030 Epoch 53/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0025 Epoch 00053: val_loss did not improve from 0.00279 Epoch 00053: val_loss did not improve from 0.00279 1105/1105 [==============================] - 58s 52ms/step - loss: 0.0025 - val_loss: 0.0028 Epoch 54/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0025 Epoch 00054: val_loss did not improve from 0.00279 Epoch 00054: val_loss did not improve from 0.00279 1105/1105 [==============================] - 58s 52ms/step - loss: 0.0025 - val_loss: 0.0028 Epoch 55/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0025 Epoch 00055: val_loss did not improve from 0.00279 Epoch 00055: val_loss did not improve from 0.00279 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0025 - val_loss: 0.0030 Epoch 56/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0024 Epoch 00056: val_loss did not improve from 0.00279 Epoch 00056: val_loss did not improve from 0.00279 1105/1105 [==============================] - 57s 51ms/step - loss: 0.0024 - val_loss: 0.0028 Epoch 57/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0024 Epoch 00057: val_loss did not improve from 0.00279 Epoch 00057: val_loss did not improve from 0.00279 1105/1105 [==============================] - 58s 53ms/step - loss: 0.0024 - val_loss: 0.0030 Epoch 58/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0024 Epoch 00058: val_loss did not improve from 0.00279 Epoch 00058: val_loss did not improve from 0.00279 1105/1105 [==============================] - 58s 52ms/step - loss: 0.0024 - val_loss: 0.0028 Epoch 59/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0023 Epoch 00059: val_loss did not improve from 0.00279 Epoch 00059: val_loss did not improve from 0.00279 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0023 - val_loss: 0.0029 Epoch 60/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0023 Epoch 00060: val_loss did not improve from 0.00279 Epoch 00060: val_loss did not improve from 0.00279 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0023 - val_loss: 0.0031 Epoch 61/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0023 Epoch 00061: val_loss did not improve from 0.00279 Epoch 00061: val_loss did not improve from 0.00279 1105/1105 [==============================] - 58s 52ms/step - loss: 0.0023 - val_loss: 0.0028 Epoch 62/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0023 Epoch 00062: val_loss improved from 0.00279 to 0.00251, saving model to reg_weights2-improvement-62-0.002510.hdf5 Epoch 00062: val_loss did not improve from 0.00251 1105/1105 [==============================] - 58s 53ms/step - loss: 0.0023 - val_loss: 0.0025 Epoch 63/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0022 Epoch 00063: val_loss did not improve from 0.00251 Epoch 00063: val_loss did not improve from 0.00251 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0022 - val_loss: 0.0028 Epoch 64/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0022 Epoch 00064: val_loss did not improve from 0.00251 Epoch 00064: val_loss did not improve from 0.00251 1105/1105 [==============================] - 58s 52ms/step - loss: 0.0022 - val_loss: 0.0029 Epoch 65/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0022 Epoch 00065: val_loss did not improve from 0.00251 Epoch 00065: val_loss did not improve from 0.00251 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0022 - val_loss: 0.0028 Epoch 66/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0022 Epoch 00066: val_loss did not improve from 0.00251 Epoch 00066: val_loss did not improve from 0.00251 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0022 - val_loss: 0.0026 Epoch 67/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0022 Epoch 00067: val_loss did not improve from 0.00251 Epoch 00067: val_loss did not improve from 0.00251 1105/1105 [==============================] - 58s 52ms/step - loss: 0.0022 - val_loss: 0.0027 Epoch 68/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0022 Epoch 00068: val_loss did not improve from 0.00251 Epoch 00068: val_loss did not improve from 0.00251 1105/1105 [==============================] - 58s 53ms/step - loss: 0.0022 - val_loss: 0.0026 Epoch 69/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0021 Epoch 00069: val_loss did not improve from 0.00251 Epoch 00069: val_loss did not improve from 0.00251 1105/1105 [==============================] - 58s 53ms/step - loss: 0.0021 - val_loss: 0.0029 Epoch 70/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0021 Epoch 00070: val_loss did not improve from 0.00251 Epoch 00070: val_loss did not improve from 0.00251 1105/1105 [==============================] - 58s 52ms/step - loss: 0.0021 - val_loss: 0.0028 Epoch 71/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0021 Epoch 00071: val_loss did not improve from 0.00251 Epoch 00071: val_loss did not improve from 0.00251 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0021 - val_loss: 0.0026 Epoch 72/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0021 Epoch 00072: val_loss did not improve from 0.00251 Epoch 00072: val_loss did not improve from 0.00251 1105/1105 [==============================] - 58s 52ms/step - loss: 0.0021 - val_loss: 0.0026 Epoch 73/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0021 Epoch 00073: val_loss did not improve from 0.00251 Epoch 00073: val_loss did not improve from 0.00251 1105/1105 [==============================] - 58s 53ms/step - loss: 0.0021 - val_loss: 0.0026 Epoch 74/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0021 Epoch 00074: val_loss did not improve from 0.00251 Epoch 00074: val_loss did not improve from 0.00251 1105/1105 [==============================] - 58s 53ms/step - loss: 0.0021 - val_loss: 0.0026 Epoch 75/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0021 Epoch 00075: val_loss improved from 0.00251 to 0.00243, saving model to reg_weights2-improvement-75-0.002429.hdf5 Epoch 00075: val_loss did not improve from 0.00243 1105/1105 [==============================] - 58s 53ms/step - loss: 0.0021 - val_loss: 0.0024 Epoch 76/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0020 Epoch 00076: val_loss did not improve from 0.00243 Epoch 00076: val_loss did not improve from 0.00243 1105/1105 [==============================] - 59s 53ms/step - loss: 0.0020 - val_loss: 0.0026 Epoch 77/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0020 Epoch 00077: val_loss improved from 0.00243 to 0.00242, saving model to reg_weights2-improvement-77-0.002417.hdf5 Epoch 00077: val_loss did not improve from 0.00242 1105/1105 [==============================] - 59s 53ms/step - loss: 0.0020 - val_loss: 0.0024 Epoch 78/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0020 Epoch 00078: val_loss did not improve from 0.00242 Epoch 00078: val_loss did not improve from 0.00242 1105/1105 [==============================] - 60s 54ms/step - loss: 0.0020 - val_loss: 0.0025 Epoch 79/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0020 Epoch 00079: val_loss did not improve from 0.00242 Epoch 00079: val_loss did not improve from 0.00242 1105/1105 [==============================] - 59s 54ms/step - loss: 0.0020 - val_loss: 0.0026 Epoch 80/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0020 Epoch 00080: val_loss did not improve from 0.00242 Epoch 00080: val_loss did not improve from 0.00242 1105/1105 [==============================] - 59s 54ms/step - loss: 0.0020 - val_loss: 0.0026 Epoch 81/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0020 Epoch 00081: val_loss improved from 0.00242 to 0.00223, saving model to reg_weights2-improvement-81-0.002230.hdf5 Epoch 00081: val_loss did not improve from 0.00223 1105/1105 [==============================] - 59s 53ms/step - loss: 0.0020 - val_loss: 0.0022 Epoch 82/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0020 Epoch 00082: val_loss did not improve from 0.00223 Epoch 00082: val_loss did not improve from 0.00223 1105/1105 [==============================] - 59s 53ms/step - loss: 0.0020 - val_loss: 0.0025 Epoch 83/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0020 Epoch 00083: val_loss did not improve from 0.00223 Epoch 00083: val_loss did not improve from 0.00223 1105/1105 [==============================] - 59s 53ms/step - loss: 0.0020 - val_loss: 0.0025 Epoch 84/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0019 Epoch 00084: val_loss did not improve from 0.00223 Epoch 00084: val_loss did not improve from 0.00223 1105/1105 [==============================] - 60s 54ms/step - loss: 0.0019 - val_loss: 0.0026 Epoch 85/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0019 Epoch 00085: val_loss did not improve from 0.00223 Epoch 00085: val_loss did not improve from 0.00223 1105/1105 [==============================] - 60s 54ms/step - loss: 0.0019 - val_loss: 0.0024 Epoch 86/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0019 Epoch 00086: val_loss did not improve from 0.00223 Epoch 00086: val_loss did not improve from 0.00223 1105/1105 [==============================] - 60s 54ms/step - loss: 0.0019 - val_loss: 0.0026 Epoch 87/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0019 Epoch 00087: val_loss did not improve from 0.00223 Epoch 00087: val_loss did not improve from 0.00223 1105/1105 [==============================] - 60s 54ms/step - loss: 0.0019 - val_loss: 0.0027 Epoch 88/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0019 Epoch 00088: val_loss did not improve from 0.00223 Epoch 00088: val_loss did not improve from 0.00223 1105/1105 [==============================] - 60s 55ms/step - loss: 0.0019 - val_loss: 0.0025 Epoch 89/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0019 Epoch 00089: val_loss did not improve from 0.00223 Epoch 00089: val_loss did not improve from 0.00223 1105/1105 [==============================] - 61s 55ms/step - loss: 0.0019 - val_loss: 0.0025 Epoch 90/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0019 Epoch 00090: val_loss did not improve from 0.00223 Epoch 00090: val_loss did not improve from 0.00223 1105/1105 [==============================] - 58s 52ms/step - loss: 0.0019 - val_loss: 0.0028 Epoch 91/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0018 Epoch 00091: val_loss did not improve from 0.00223 Epoch 00091: val_loss did not improve from 0.00223 1105/1105 [==============================] - 59s 53ms/step - loss: 0.0018 - val_loss: 0.0027 Epoch 92/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0018 Epoch 00092: val_loss did not improve from 0.00223 Epoch 00092: val_loss did not improve from 0.00223 1105/1105 [==============================] - 58s 53ms/step - loss: 0.0018 - val_loss: 0.0026 Epoch 93/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0018 Epoch 00093: val_loss did not improve from 0.00223 Epoch 00093: val_loss did not improve from 0.00223 1105/1105 [==============================] - 58s 52ms/step - loss: 0.0018 - val_loss: 0.0026 Epoch 94/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0018 Epoch 00094: val_loss did not improve from 0.00223 Epoch 00094: val_loss did not improve from 0.00223 1105/1105 [==============================] - 58s 53ms/step - loss: 0.0018 - val_loss: 0.0025 Epoch 95/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0018 Epoch 00095: val_loss did not improve from 0.00223 Epoch 00095: val_loss did not improve from 0.00223 1105/1105 [==============================] - 58s 52ms/step - loss: 0.0018 - val_loss: 0.0026 Epoch 96/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0018 Epoch 00096: val_loss did not improve from 0.00223 Epoch 00096: val_loss did not improve from 0.00223 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0018 - val_loss: 0.0026 Epoch 97/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0018 Epoch 00097: val_loss did not improve from 0.00223 Epoch 00097: val_loss did not improve from 0.00223 1105/1105 [==============================] - 58s 53ms/step - loss: 0.0018 - val_loss: 0.0025 Epoch 98/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0017 Epoch 00098: val_loss did not improve from 0.00223 Epoch 00098: val_loss did not improve from 0.00223 1105/1105 [==============================] - 60s 54ms/step - loss: 0.0017 - val_loss: 0.0024 Epoch 99/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0017 Epoch 00099: val_loss did not improve from 0.00223 Epoch 00099: val_loss did not improve from 0.00223 1105/1105 [==============================] - 59s 53ms/step - loss: 0.0017 - val_loss: 0.0022 Epoch 100/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0017 Epoch 00100: val_loss did not improve from 0.00223 Epoch 00100: val_loss did not improve from 0.00223 1105/1105 [==============================] - 59s 54ms/step - loss: 0.0017 - val_loss: 0.0026 Epoch 101/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0017 Epoch 00101: val_loss did not improve from 0.00223 Epoch 00101: val_loss did not improve from 0.00223 1105/1105 [==============================] - 58s 52ms/step - loss: 0.0017 - val_loss: 0.0025 Epoch 102/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0017 Epoch 00102: val_loss did not improve from 0.00223 Epoch 00102: val_loss did not improve from 0.00223 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0017 - val_loss: 0.0028 Epoch 103/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0017 Epoch 00103: val_loss did not improve from 0.00223 Epoch 00103: val_loss did not improve from 0.00223 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0017 - val_loss: 0.0028 Epoch 104/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0017 Epoch 00104: val_loss did not improve from 0.00223 Epoch 00104: val_loss did not improve from 0.00223 1105/1105 [==============================] - 58s 52ms/step - loss: 0.0017 - val_loss: 0.0026 Epoch 105/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0017 Epoch 00105: val_loss did not improve from 0.00223 Epoch 00105: val_loss did not improve from 0.00223 1105/1105 [==============================] - 59s 54ms/step - loss: 0.0017 - val_loss: 0.0027 Epoch 106/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0017 Epoch 00106: val_loss did not improve from 0.00223 Epoch 00106: val_loss did not improve from 0.00223 1105/1105 [==============================] - 58s 53ms/step - loss: 0.0017 - val_loss: 0.0025 Epoch 107/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0017 Epoch 00107: val_loss did not improve from 0.00223 Epoch 00107: val_loss did not improve from 0.00223 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0017 - val_loss: 0.0023 Epoch 108/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0017 Epoch 00108: val_loss did not improve from 0.00223 Epoch 00108: val_loss did not improve from 0.00223 1105/1105 [==============================] - 58s 53ms/step - loss: 0.0017 - val_loss: 0.0024 Epoch 109/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0016 Epoch 00109: val_loss did not improve from 0.00223 Epoch 00109: val_loss did not improve from 0.00223 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0016 - val_loss: 0.0027 Epoch 110/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0016 Epoch 00110: val_loss improved from 0.00223 to 0.00219, saving model to reg_weights2-improvement-110-0.002186.hdf5 Epoch 00110: val_loss did not improve from 0.00219 1105/1105 [==============================] - 59s 53ms/step - loss: 0.0016 - val_loss: 0.0022 Epoch 111/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0016 Epoch 00111: val_loss did not improve from 0.00219 Epoch 00111: val_loss did not improve from 0.00219 1105/1105 [==============================] - 58s 52ms/step - loss: 0.0016 - val_loss: 0.0022 Epoch 112/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0016 Epoch 00112: val_loss did not improve from 0.00219 Epoch 00112: val_loss did not improve from 0.00219 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0016 - val_loss: 0.0026 Epoch 113/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0016 Epoch 00113: val_loss did not improve from 0.00219 Epoch 00113: val_loss did not improve from 0.00219 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0016 - val_loss: 0.0026 Epoch 114/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0016 Epoch 00114: val_loss did not improve from 0.00219 Epoch 00114: val_loss did not improve from 0.00219 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0016 - val_loss: 0.0024 Epoch 115/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0016 Epoch 00115: val_loss did not improve from 0.00219 Epoch 00115: val_loss did not improve from 0.00219 1105/1105 [==============================] - 57s 51ms/step - loss: 0.0016 - val_loss: 0.0022 Epoch 116/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0016 Epoch 00116: val_loss did not improve from 0.00219 Epoch 00116: val_loss did not improve from 0.00219 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0016 - val_loss: 0.0025 Epoch 117/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0015 Epoch 00117: val_loss did not improve from 0.00219 Epoch 00117: val_loss did not improve from 0.00219 1105/1105 [==============================] - 57s 51ms/step - loss: 0.0015 - val_loss: 0.0026 Epoch 118/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0015 Epoch 00118: val_loss did not improve from 0.00219 Epoch 00118: val_loss did not improve from 0.00219 1105/1105 [==============================] - 57s 51ms/step - loss: 0.0015 - val_loss: 0.0024 Epoch 119/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0015 Epoch 00119: val_loss did not improve from 0.00219 Epoch 00119: val_loss did not improve from 0.00219 1105/1105 [==============================] - 57s 51ms/step - loss: 0.0015 - val_loss: 0.0026 Epoch 120/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0015 Epoch 00120: val_loss did not improve from 0.00219 Epoch 00120: val_loss did not improve from 0.00219 1105/1105 [==============================] - 57s 51ms/step - loss: 0.0015 - val_loss: 0.0024 Epoch 121/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0015 Epoch 00121: val_loss did not improve from 0.00219 Epoch 00121: val_loss did not improve from 0.00219 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0015 - val_loss: 0.0026 Epoch 122/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0015 Epoch 00122: val_loss did not improve from 0.00219 Epoch 00122: val_loss did not improve from 0.00219 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0015 - val_loss: 0.0024 Epoch 123/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0015 Epoch 00123: val_loss did not improve from 0.00219 Epoch 00123: val_loss did not improve from 0.00219 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0015 - val_loss: 0.0026 Epoch 124/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0015 Epoch 00124: val_loss did not improve from 0.00219 Epoch 00124: val_loss did not improve from 0.00219 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0015 - val_loss: 0.0023 Epoch 125/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0015 Epoch 00125: val_loss did not improve from 0.00219 Epoch 00125: val_loss did not improve from 0.00219 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0015 - val_loss: 0.0025 Epoch 126/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0015 Epoch 00126: val_loss did not improve from 0.00219 Epoch 00126: val_loss did not improve from 0.00219 1105/1105 [==============================] - 57s 51ms/step - loss: 0.0015 - val_loss: 0.0028 Epoch 127/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0015 Epoch 00127: val_loss did not improve from 0.00219 Epoch 00127: val_loss did not improve from 0.00219 1105/1105 [==============================] - 58s 52ms/step - loss: 0.0015 - val_loss: 0.0024 Epoch 128/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0015 Epoch 00128: val_loss did not improve from 0.00219 Epoch 00128: val_loss did not improve from 0.00219 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0015 - val_loss: 0.0025 Epoch 129/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0015 Epoch 00129: val_loss did not improve from 0.00219 Epoch 00129: val_loss did not improve from 0.00219 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0015 - val_loss: 0.0025 Epoch 130/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0015 Epoch 00130: val_loss did not improve from 0.00219 Epoch 00130: val_loss did not improve from 0.00219 1105/1105 [==============================] - 58s 52ms/step - loss: 0.0015 - val_loss: 0.0023 Epoch 131/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0014 Epoch 00131: val_loss did not improve from 0.00219 Epoch 00131: val_loss did not improve from 0.00219 1105/1105 [==============================] - 57s 51ms/step - loss: 0.0014 - val_loss: 0.0025 Epoch 132/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0014 Epoch 00132: val_loss did not improve from 0.00219 Epoch 00132: val_loss did not improve from 0.00219 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0014 - val_loss: 0.0024 Epoch 133/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0014 Epoch 00133: val_loss did not improve from 0.00219 Epoch 00133: val_loss did not improve from 0.00219 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0014 - val_loss: 0.0024 Epoch 134/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0014 Epoch 00134: val_loss did not improve from 0.00219 Epoch 00134: val_loss did not improve from 0.00219 1105/1105 [==============================] - 56s 50ms/step - loss: 0.0014 - val_loss: 0.0025 Epoch 135/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0014 Epoch 00135: val_loss did not improve from 0.00219 Epoch 00135: val_loss did not improve from 0.00219 1105/1105 [==============================] - 56s 50ms/step - loss: 0.0014 - val_loss: 0.0028 Epoch 136/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0014 Epoch 00136: val_loss did not improve from 0.00219 Epoch 00136: val_loss did not improve from 0.00219 1105/1105 [==============================] - 57s 51ms/step - loss: 0.0014 - val_loss: 0.0022 Epoch 137/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0014 Epoch 00137: val_loss did not improve from 0.00219 Epoch 00137: val_loss did not improve from 0.00219 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0014 - val_loss: 0.0023 Epoch 138/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0014 Epoch 00138: val_loss did not improve from 0.00219 Epoch 00138: val_loss did not improve from 0.00219 1105/1105 [==============================] - 61s 55ms/step - loss: 0.0014 - val_loss: 0.0024 Epoch 139/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0014 Epoch 00139: val_loss did not improve from 0.00219 Epoch 00139: val_loss did not improve from 0.00219 1105/1105 [==============================] - 59s 53ms/step - loss: 0.0014 - val_loss: 0.0024 Epoch 140/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0014 Epoch 00140: val_loss did not improve from 0.00219 Epoch 00140: val_loss did not improve from 0.00219 1105/1105 [==============================] - 61s 55ms/step - loss: 0.0014 - val_loss: 0.0022 Epoch 141/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0014 Epoch 00141: val_loss did not improve from 0.00219 Epoch 00141: val_loss did not improve from 0.00219 1105/1105 [==============================] - 60s 54ms/step - loss: 0.0014 - val_loss: 0.0025 Epoch 142/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0014 Epoch 00142: val_loss did not improve from 0.00219 Epoch 00142: val_loss did not improve from 0.00219 1105/1105 [==============================] - 58s 52ms/step - loss: 0.0014 - val_loss: 0.0022 Epoch 143/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0014 Epoch 00143: val_loss did not improve from 0.00219 Epoch 00143: val_loss did not improve from 0.00219 1105/1105 [==============================] - 60s 55ms/step - loss: 0.0014 - val_loss: 0.0026 Epoch 144/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0014 Epoch 00144: val_loss did not improve from 0.00219 Epoch 00144: val_loss did not improve from 0.00219 1105/1105 [==============================] - 58s 53ms/step - loss: 0.0014 - val_loss: 0.0027 Epoch 145/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0014 Epoch 00145: val_loss did not improve from 0.00219 Epoch 00145: val_loss did not improve from 0.00219 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0014 - val_loss: 0.0027 Epoch 146/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0014 Epoch 00146: val_loss did not improve from 0.00219 Epoch 00146: val_loss did not improve from 0.00219 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0014 - val_loss: 0.0024 Epoch 147/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0014 Epoch 00147: val_loss did not improve from 0.00219 Epoch 00147: val_loss did not improve from 0.00219 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0014 - val_loss: 0.0025 Epoch 148/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0014 Epoch 00148: val_loss did not improve from 0.00219 Epoch 00148: val_loss did not improve from 0.00219 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0014 - val_loss: 0.0024 Epoch 149/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0014 Epoch 00149: val_loss did not improve from 0.00219 Epoch 00149: val_loss did not improve from 0.00219 1105/1105 [==============================] - 57s 51ms/step - loss: 0.0014 - val_loss: 0.0025 Epoch 150/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0014 Epoch 00150: val_loss did not improve from 0.00219 Epoch 00150: val_loss did not improve from 0.00219 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0014 - val_loss: 0.0023 Epoch 151/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0014 Epoch 00151: val_loss did not improve from 0.00219 Epoch 00151: val_loss did not improve from 0.00219 1105/1105 [==============================] - 58s 52ms/step - loss: 0.0014 - val_loss: 0.0023 Epoch 152/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0014 Epoch 00152: val_loss did not improve from 0.00219 Epoch 00152: val_loss did not improve from 0.00219 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0014 - val_loss: 0.0024 Epoch 153/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0014 Epoch 00153: val_loss did not improve from 0.00219 Epoch 00153: val_loss did not improve from 0.00219 1105/1105 [==============================] - 58s 52ms/step - loss: 0.0014 - val_loss: 0.0025 Epoch 154/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0014 Epoch 00154: val_loss did not improve from 0.00219 Epoch 00154: val_loss did not improve from 0.00219 1105/1105 [==============================] - 58s 53ms/step - loss: 0.0014 - val_loss: 0.0025 Epoch 155/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0013 Epoch 00155: val_loss did not improve from 0.00219 Epoch 00155: val_loss did not improve from 0.00219 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0013 - val_loss: 0.0024 Epoch 156/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0014 Epoch 00156: val_loss did not improve from 0.00219 Epoch 00156: val_loss did not improve from 0.00219 1105/1105 [==============================] - 58s 52ms/step - loss: 0.0014 - val_loss: 0.0022 Epoch 157/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0013 Epoch 00157: val_loss did not improve from 0.00219 Epoch 00157: val_loss did not improve from 0.00219 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0013 - val_loss: 0.0026 Epoch 158/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0013 Epoch 00158: val_loss did not improve from 0.00219 Epoch 00158: val_loss did not improve from 0.00219 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0013 - val_loss: 0.0024 Epoch 159/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0013 Epoch 00159: val_loss did not improve from 0.00219 Epoch 00159: val_loss did not improve from 0.00219 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0013 - val_loss: 0.0025 Epoch 160/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0013 Epoch 00160: val_loss did not improve from 0.00219 Epoch 00160: val_loss did not improve from 0.00219 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0013 - val_loss: 0.0024 Epoch 161/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0013 Epoch 00161: val_loss did not improve from 0.00219 Epoch 00161: val_loss did not improve from 0.00219 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0013 - val_loss: 0.0023 Epoch 162/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0013 Epoch 00162: val_loss did not improve from 0.00219 Epoch 00162: val_loss did not improve from 0.00219 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0013 - val_loss: 0.0025 Epoch 163/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0013 Epoch 00163: val_loss did not improve from 0.00219 Epoch 00163: val_loss did not improve from 0.00219 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0013 - val_loss: 0.0024 Epoch 164/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0013 Epoch 00164: val_loss did not improve from 0.00219 Epoch 00164: val_loss did not improve from 0.00219 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0013 - val_loss: 0.0029 Epoch 165/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0013 Epoch 00165: val_loss did not improve from 0.00219 Epoch 00165: val_loss did not improve from 0.00219 1105/1105 [==============================] - 59s 53ms/step - loss: 0.0013 - val_loss: 0.0024 Epoch 166/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0013 Epoch 00166: val_loss did not improve from 0.00219 Epoch 00166: val_loss did not improve from 0.00219 1105/1105 [==============================] - 58s 52ms/step - loss: 0.0013 - val_loss: 0.0026 Epoch 167/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0013 Epoch 00167: val_loss did not improve from 0.00219 Epoch 00167: val_loss did not improve from 0.00219 1105/1105 [==============================] - 59s 53ms/step - loss: 0.0013 - val_loss: 0.0025 Epoch 168/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0013 Epoch 00168: val_loss did not improve from 0.00219 Epoch 00168: val_loss did not improve from 0.00219 1105/1105 [==============================] - 62s 56ms/step - loss: 0.0013 - val_loss: 0.0024 Epoch 169/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0013 Epoch 00169: val_loss did not improve from 0.00219 Epoch 00169: val_loss did not improve from 0.00219 1105/1105 [==============================] - 62s 56ms/step - loss: 0.0013 - val_loss: 0.0027 Epoch 170/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0013 Epoch 00170: val_loss did not improve from 0.00219 Epoch 00170: val_loss did not improve from 0.00219 1105/1105 [==============================] - 63s 57ms/step - loss: 0.0013 - val_loss: 0.0023 Epoch 171/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0013 Epoch 00171: val_loss did not improve from 0.00219 Epoch 00171: val_loss did not improve from 0.00219 1105/1105 [==============================] - 61s 55ms/step - loss: 0.0013 - val_loss: 0.0024 Epoch 172/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0013 Epoch 00172: val_loss did not improve from 0.00219 Epoch 00172: val_loss did not improve from 0.00219 1105/1105 [==============================] - 58s 53ms/step - loss: 0.0013 - val_loss: 0.0026 Epoch 173/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0013 Epoch 00173: val_loss did not improve from 0.00219 Epoch 00173: val_loss did not improve from 0.00219 1105/1105 [==============================] - 61s 56ms/step - loss: 0.0013 - val_loss: 0.0024 Epoch 174/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0013 Epoch 00174: val_loss did not improve from 0.00219 Epoch 00174: val_loss did not improve from 0.00219 1105/1105 [==============================] - 60s 54ms/step - loss: 0.0013 - val_loss: 0.0027 Epoch 175/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0013 Epoch 00175: val_loss did not improve from 0.00219 Epoch 00175: val_loss did not improve from 0.00219 1105/1105 [==============================] - 62s 57ms/step - loss: 0.0013 - val_loss: 0.0023 Epoch 176/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0013 Epoch 00176: val_loss did not improve from 0.00219 Epoch 00176: val_loss did not improve from 0.00219 1105/1105 [==============================] - 60s 54ms/step - loss: 0.0013 - val_loss: 0.0024 Epoch 177/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0013 Epoch 00177: val_loss did not improve from 0.00219 Epoch 00177: val_loss did not improve from 0.00219 1105/1105 [==============================] - 59s 53ms/step - loss: 0.0013 - val_loss: 0.0027 Epoch 178/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0013 Epoch 00178: val_loss did not improve from 0.00219 Epoch 00178: val_loss did not improve from 0.00219 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0013 - val_loss: 0.0026 Epoch 179/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0013 Epoch 00179: val_loss did not improve from 0.00219 Epoch 00179: val_loss did not improve from 0.00219 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0013 - val_loss: 0.0025 Epoch 180/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0013 Epoch 00180: val_loss did not improve from 0.00219 Epoch 00180: val_loss did not improve from 0.00219 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0013 - val_loss: 0.0026 Epoch 181/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0012 Epoch 00181: val_loss did not improve from 0.00219 Epoch 00181: val_loss did not improve from 0.00219 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0012 - val_loss: 0.0026 Epoch 182/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0012 Epoch 00182: val_loss improved from 0.00219 to 0.00214, saving model to reg_weights2-improvement-182-0.002136.hdf5 Epoch 00182: val_loss did not improve from 0.00214 1105/1105 [==============================] - 57s 51ms/step - loss: 0.0012 - val_loss: 0.0021 Epoch 183/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0013 Epoch 00183: val_loss did not improve from 0.00214 Epoch 00183: val_loss did not improve from 0.00214 1105/1105 [==============================] - 57s 51ms/step - loss: 0.0013 - val_loss: 0.0025 Epoch 184/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0012 Epoch 00184: val_loss did not improve from 0.00214 Epoch 00184: val_loss did not improve from 0.00214 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0012 - val_loss: 0.0026 Epoch 185/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0013 Epoch 00185: val_loss did not improve from 0.00214 Epoch 00185: val_loss did not improve from 0.00214 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0013 - val_loss: 0.0026 Epoch 186/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0012 Epoch 00186: val_loss did not improve from 0.00214 Epoch 00186: val_loss did not improve from 0.00214 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0012 - val_loss: 0.0025 Epoch 187/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0012 Epoch 00187: val_loss did not improve from 0.00214 Epoch 00187: val_loss did not improve from 0.00214 1105/1105 [==============================] - 59s 54ms/step - loss: 0.0012 - val_loss: 0.0023 Epoch 188/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0012 Epoch 00188: val_loss did not improve from 0.00214 Epoch 00188: val_loss did not improve from 0.00214 1105/1105 [==============================] - 60s 54ms/step - loss: 0.0012 - val_loss: 0.0025 Epoch 189/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0012 Epoch 00189: val_loss did not improve from 0.00214 Epoch 00189: val_loss did not improve from 0.00214 1105/1105 [==============================] - 59s 53ms/step - loss: 0.0012 - val_loss: 0.0028 Epoch 190/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0012 Epoch 00190: val_loss did not improve from 0.00214 Epoch 00190: val_loss did not improve from 0.00214 1105/1105 [==============================] - 59s 53ms/step - loss: 0.0012 - val_loss: 0.0027 Epoch 191/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0012 Epoch 00191: val_loss did not improve from 0.00214 Epoch 00191: val_loss did not improve from 0.00214 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0012 - val_loss: 0.0024 Epoch 192/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0012 Epoch 00192: val_loss did not improve from 0.00214 Epoch 00192: val_loss did not improve from 0.00214 1105/1105 [==============================] - 58s 52ms/step - loss: 0.0012 - val_loss: 0.0027 Epoch 193/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0012 Epoch 00193: val_loss did not improve from 0.00214 Epoch 00193: val_loss did not improve from 0.00214 1105/1105 [==============================] - 60s 54ms/step - loss: 0.0012 - val_loss: 0.0026 Epoch 194/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0012 Epoch 00194: val_loss did not improve from 0.00214 Epoch 00194: val_loss did not improve from 0.00214 1105/1105 [==============================] - 59s 54ms/step - loss: 0.0012 - val_loss: 0.0024 Epoch 195/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0012 Epoch 00195: val_loss did not improve from 0.00214 Epoch 00195: val_loss did not improve from 0.00214 1105/1105 [==============================] - 60s 54ms/step - loss: 0.0012 - val_loss: 0.0023 Epoch 196/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0012 Epoch 00196: val_loss did not improve from 0.00214 Epoch 00196: val_loss did not improve from 0.00214 1105/1105 [==============================] - 60s 54ms/step - loss: 0.0012 - val_loss: 0.0023 Epoch 197/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0012 Epoch 00197: val_loss did not improve from 0.00214 Epoch 00197: val_loss did not improve from 0.00214 1105/1105 [==============================] - 57s 52ms/step - loss: 0.0012 - val_loss: 0.0023 Epoch 198/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0012 Epoch 00198: val_loss did not improve from 0.00214 Epoch 00198: val_loss did not improve from 0.00214 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0012 - val_loss: 0.0024 Epoch 199/200 1105/1105 [==============================] - ETA: 0s - loss: 0.0012 Epoch 00199: val_loss did not improve from 0.00214 Epoch 00199: val_loss did not improve from 0.00214 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0012 - val_loss: 0.0024 Epoch 200/200 1104/1105 [============================>.] - ETA: 0s - loss: 0.0012 Epoch 00200: val_loss did not improve from 0.00214 Epoch 00200: val_loss did not improve from 0.00214 1105/1105 [==============================] - 56s 51ms/step - loss: 0.0012 - val_loss: 0.0026
Next, let's plot the evolution of the model over the epochs of training:
# list all data in history
print(reg_history.history.keys())
# summarize history for loss
plt.plot(reg_history.history['loss'])
plt.plot(reg_history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.savefig("reg_loss.png")
plt.show()
dict_keys(['loss', 'val_loss'])
Model Testing¶
Let's evaluate the model performance over the train and test set:
reg_model.load_weights("/content/reg_weights2-improvement-81-0.002230.hdf5")
from sklearn.metrics import mean_squared_error
y_train_pred_reg=reg_model.predict(muon_hits_x_train_values)
print("train root mean squared error: ",mean_squared_error(pt_inverse_labels_train_values,y_train_pred_reg,squared=False))
y_test_pred_reg=reg_model.predict(muon_hits_x_test_values)
print("test root mean squared error: ",mean_squared_error(pt_inverse_labels_test_values,y_test_pred_reg,squared=False))
train root mean squared error: 0.046178654 test root mean squared error: 0.047247976
That is impressive! the root mean square error on the testing data is relatively small in comparison to the data!
Future Directions¶
- We discovered in this project that the classifier model was not very good in detecting minority classes. Therefore we need to look for more techniques to make the model less biased (such as up sampling minority classes or down sampling the majority class).
- We can further improve the performance of our regression model by hyperparameters tuning through grid search.