```python
batch_norm_params = {
    # Decay for the moving averages.
    'decay': batch_norm_decay,
    # epsilon to prevent 0s in variance.
    'epsilon': batch_norm_epsilon,
    # collection containing update_ops.
    'updates_collections': tf.GraphKeys.UPDATE_OPS,
}
slim.arg_scope([slim.conv2d],
               normalizer_fn=slim.batch_norm,
               normalizer_params=batch_norm_params)
```
For tf.contrib.layers or tf.slim, when is_training=True, the mean and variance of each batch are used, and moving_mean and moving_variance are updated if applicable. When is_training=False, the loaded moving_mean and moving_variance are used.
Triggering the update of moving_mean and moving_variance needs special attention, because the update ops are detached from the gradient-descent graph. One way is to collect the ops in tf.GraphKeys.UPDATE_OPS and add them as a control dependency of the train op.
Otherwise, one can set updates_collections=None in slim.batch_norm to force the updates in place, but that incurs a speed penalty, especially in distributed settings.
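Adding the ops collected in tf.GraphKeys.UPDATE_OPS as a control dependency of the train op can be sketched as follows. This is a minimal sketch using the tf.compat.v1 API; the loss here is a stand-in for a real network loss built with slim layers (which is what actually registers batch-norm update ops):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()  # graph mode, matching the original TF1-style code

# Stand-in loss; in practice this is the network's total loss, built with
# slim layers so that slim.batch_norm registers ops in UPDATE_OPS.
x = tf.get_variable("x", initializer=1.0)
total_loss = tf.square(x)

# Collect the moving_mean/moving_variance update ops and make the train op
# depend on them, so they run on every training step.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(total_loss)
```

With this pattern the updates ride along with gradient descent, without the speed penalty of updates_collections=None.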
However, when trained on small-scale datasets, using moving_mean and moving_variance in the test stage often leads to extremely poor performance (close to random guessing). This is due to the cold start, which renders moving_mean/moving_variance unstable. There are two ways to fix the cold-start issue:
in the testing stage, also set is_training=True, i.e., use the mean and variance based on each test batch.
decrease the batch_norm moving-average decay from the default 0.999 to something like 0.99, which speeds up the warm-up. When tuning decay there is a trade-off between warm-up speed and statistical accuracy. For small-scale datasets, warm-up may take an exceedingly long time, e.g., 300 epochs.
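The effect of decay on warm-up speed can be illustrated with a stand-alone simulation of the moving-average update rule (the function and thresholds below are illustrative, not from the original experiments):

```python
def steps_to_warm_up(decay, target=0.99):
    """Steps until the moving average (initialized at 0) reaches `target`
    of a constant batch statistic of 1.0, under the batch-norm update rule."""
    moving, steps = 0.0, 0
    while moving < target:
        moving = decay * moving + (1 - decay) * 1.0  # moving-average update
        steps += 1
    return steps

print(steps_to_warm_up(0.999))  # thousands of updates
print(steps_to_warm_up(0.99))   # roughly 10x fewer
```

This is why decay=0.999 can leave the statistics far from converged after a short training run on a small dataset.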
without slim: tf.nn.batch_normalization, no moving_mean/variance
```python
def batchnorm(bn_input):
    with tf.variable_scope("batchnorm"):
        # this block looks like it has 3 inputs on the graph unless we do this
        bn_input = tf.identity(bn_input)

        channels = bn_input.get_shape()[3]
        offset = tf.get_variable("offset", [channels], dtype=tf.float32,
                                 initializer=tf.zeros_initializer())
        scale = tf.get_variable("scale", [channels], dtype=tf.float32,
                                initializer=tf.random_normal_initializer(1.0, 0.02))
        mean, variance = tf.nn.moments(bn_input, axes=[0, 1, 2], keep_dims=False)
        normalized = tf.nn.batch_normalization(bn_input, mean, variance, offset, scale,
                                               variance_epsilon=1e-5)
        return normalized
```
Utils
print all model variables
```python
tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)  # all the global variables
# variables defined by slim (tf.contrib.framework.model_variable), excluding gradient variables
slim.get_model_variables()  # or tf.get_collection(tf.GraphKeys.MODEL_VARIABLES)
# excluding gradient variables and batch_norm variables (moving_mean and moving_variance)
tf.trainable_variables()
```
print regularization losses (weight decay) and other losses
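A minimal sketch of looking these up, assuming the tf.compat.v1 API; the variable name and the L2 scale below are made-up examples:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# A weight with a regularizer registers a loss in REGULARIZATION_LOSSES;
# 'w' and the 1e-4 scale are made-up for illustration.
w = tf.get_variable("w", shape=[3, 3],
                    regularizer=lambda t: 1e-4 * tf.nn.l2_loss(t))

reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)  # weight decay
other_losses = tf.get_collection(tf.GraphKeys.LOSSES)               # e.g. tf.losses.* terms
print(reg_losses, other_losses)
```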
A frame with menubar, toolbar, and statusbar: UpdateUIEvents are sent periodically by the framework during idle time to allow the application to check if the state of a control needs to be updated.
Video Player: first, install the MplayerCtrl lib; second, place the mplayer folder under the current working directory.
Layout
wx.BoxSizer: proportion controls sizing along the main direction and wx.EXPAND controls the other direction. Note that in a BoxSizer, alignment is only valid in one direction. AddSpacer(50) is equal to Add((50, 50)). AddStretchSpacer() is equal to Add((0, 0), proportion=1).
```python
sizer = wx.BoxSizer(wx.HORIZONTAL)
sizer.AddSpacer(50)
sizer.Add(sth, proportion=0, flag=wx.ALL, border=5)  # use flag to mark which sides have a border
sizer.Add((-1, 10))  # add a blank space, height=10
# sizer.Add(sth, proportion=0, flag=wx.EXPAND | wx.RIGHT | wx.ALIGN_RIGHT, border=5)
sizer.AddStretchSpacer()  # equal to sizer.Add((0, 0), proportion=1)
self.SetSizer(sizer)
self.SetInitialSize()
```
wx.GridSizer: proportion is usually set to 0; use Add((20, 20), 1, wx.EXPAND) to take up space.
Event Propagation: when an action can trigger multiple events, use event.Skip() to guarantee the occurrence of the following events. Take keyevents.py for an example.
Virtual widget: wx.PyPanel
Bind a function which will be checked during idle time: self.Bind(wx.EVT_UPDATE_UI, self.OnUpdateEditMenu)
The default HTTP method for route() is GET; the other methods POST, PUT, DELETE, and PATCH can also be used. The POST method is commonly used for HTML form submission; entering a URL in the browser issues a GET request.
Use redirect to jump to another page: bottle.redirect('/login')
Use the HTML template. In the template file, the lines starting with % are Python code and the others are HTML. We can include other templates in the current template by using % include('header.tpl', title='Page Title'). Cookies, HTTP headers, HTML <form> fields and other request data are available through the global request object.
When comparing the efficiency of different libraries, there can be a few orders of magnitude of difference. In implementations that demand high efficiency, locate the time-consuming function and replace it with the most efficient library function.
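Such order-of-magnitude gaps are easy to measure with the standard library alone; a minimal sketch using timeit (the list-vs-set membership example is made up, not from the original benchmarks):

```python
import timeit

# Membership test in a list (linear scan) vs. a set (hash lookup).
t_list = timeit.timeit('999 in data', setup='data = list(range(1000))', number=10000)
t_set = timeit.timeit('999 in data', setup='data = set(range(1000))', number=10000)
print(f"list: {t_list:.4f}s  set: {t_set:.4f}s")  # set is orders of magnitude faster
```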
Text: Pandas Installation: pip install pandas or conda install pandas
```python
import pandas as pd

data = pd.read_csv(text_name, sep=',', header=None)
```
Image: Pillow-SIMD, skimage, OpenCV, imageio
```python
import cv2
import skimage
import imageio
from PIL import Image
```
Pillow-SIMD is faster than Pillow; its numbers are not reported here. OpenCV is the most efficient one here.
Video: OpenCV, skvideo, imageio
```python
import os
import sys

import cv2
import imageio
import skvideo.io

# read a 30fps video with each frame 1280x720

# OpenCV: 0.002s per frame
cap = cv2.VideoCapture(video_name)
ret, frame = cap.read()

# imageio: 0.004s per frame
vid = imageio.get_reader(video_name, 'ffmpeg')
for image in vid.iter_data():
    pass

# skvideo: 0.073s per frame
skvideo.setFFmpegPath(os.path.dirname(sys.executable))
videogen = skvideo.io.vreader(video_name)
for img in videogen:
    pass
```
For OpenCV in Anaconda, it sometimes fails to read from a video file but succeeds in reading from a camera. In this case, /usr/bin/python is recommended. imageio and OpenCV are comparable here.