2021-09-16:17:25:43,112 INFO [iwn.py:43] Loading bengali language synsets...
2021-09-16:17:25:56,34 WARNING [utils.py:429] using automatically assigned random_state=1570809799
2021-09-16:17:25:56,66 INFO [splitting.py:60] done splitting triples to groups of sizes [267045, 38830, 38831]
/srv/home/bhattacharyya/anaconda3/lib/python3.7/site-packages/sklearn/ensemble/weight_boosting.py:29: DeprecationWarning: numpy.core.umath_tests is an internal NumPy module and should not be imported. It will be removed in a future NumPy release.
  from numpy.core.umath_tests import inner1d
/srv/home/bhattacharyya/anaconda3/lib/python3.7/site-packages/sklearn/ensemble/gradient_boosting.py:34: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  from ._gradient_boosting import predict_stages
[I 2021-09-16 17:25:56,801] A new study created in memory with name: no-name-a93622a0-e597-4afe-b4b1-373c77259791
2021-09-16:17:25:56,846 INFO [hpo.py:622] Using model:
2021-09-16:17:25:56,847 INFO [hpo.py:626] Using loss:
2021-09-16:17:25:56,847 INFO [hpo.py:637] Using regularizer:
2021-09-16:17:25:56,847 INFO [hpo.py:641] Using optimizer:
2021-09-16:17:25:56,847 INFO [hpo.py:645] Using training loop:
2021-09-16:17:25:56,847 INFO [hpo.py:651] Using negative sampler:
2021-09-16:17:25:56,847 INFO [hpo.py:662] Using evaluator:
2021-09-16:17:25:56,847 INFO [hpo.py:666] Attempting to maximize adjusted_arithmetic_mean_rank_index
2021-09-16:17:25:56,847 INFO [hpo.py:668] Filter validation triples when testing: True
/srv/home/bhattacharyya/anaconda3/lib/python3.7/site-packages/optuna/distributions.py:563: UserWarning: The distribution is specified by [32, 4000] and step=100, but the range is not divisible by `step`. It will be replaced by [32, 3932].
  low=low, old_high=old_high, high=high, step=step
2021-09-16:17:25:56,851 WARNING [api.py:823] No random seed is specified. Setting to 325143478.
Training epochs on cuda: 0%| | 0/10 [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [02:49<00:00, 16.92s/epoch, loss=0.00201, prev_loss=0.002]
2021-09-16:17:28:51,299 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpf0ls_5rl'
2021-09-16:17:28:51,422 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpf0ls_5rl' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [02:41<00:00, 16.14s/epoch, loss=0.000495, prev_loss=0.000493]
2021-09-16:17:33:51,958 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp68potrz9'
2021-09-16:17:33:52,58 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp68potrz9' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [02:44<00:00, 16.42s/epoch, loss=0.000735, prev_loss=0.000732]
2021-09-16:17:38:55,230 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpesz20md2'
2021-09-16:17:38:55,432 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpesz20md2' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [02:51<00:00, 17.12s/epoch, loss=0.00167, prev_loss=0.00166]
2021-09-16:17:44:05,636 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpf8to_9dz'
2021-09-16:17:44:05,744 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpf8to_9dz' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:48<00:00, 22.82s/epoch, loss=0.000821, prev_loss=0.000817]
2021-09-16:17:50:13,222 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp0fbxej8h'
2021-09-16:17:50:13,319 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp0fbxej8h' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [06:10<00:00, 37.03s/epoch, loss=0.0356, prev_loss=0.0356]
2021-09-16:17:58:42,493 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpamendkre'
2021-09-16:17:58:42,603 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpamendkre' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [02:43<00:00, 16.39s/epoch, loss=0.000259, prev_loss=0.000258]
2021-09-16:18:03:45,136 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpbgxmblje'
2021-09-16:18:03:45,259 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpbgxmblje' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [02:47<00:00, 16.76s/epoch, loss=0.000479, prev_loss=0.000774]
2021-09-16:18:08:52,10 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpocu8dob_'
2021-09-16:18:08:52,232 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpocu8dob_' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [02:52<00:00, 17.28s/epoch, loss=0.00367, prev_loss=0.0039]
2021-09-16:18:14:05,641 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpnjejlehs'
2021-09-16:18:14:05,761 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpnjejlehs' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:08<00:00, 18.87s/epoch, loss=0.00209, prev_loss=0.00209]
2021-09-16:18:19:33,501 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpg1ld490j'
2021-09-16:18:19:33,599 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpg1ld490j' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [17:50<00:00, 107.05s/epoch, loss=1.85, prev_loss=1.85]
2021-09-16:18:39:43,4 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpck5b6t1d'
2021-09-16:18:39:43,116 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpck5b6t1d' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [17:57<00:00, 107.77s/epoch, loss=4.58, prev_loss=4.53]
2021-09-16:19:00:00,10 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpe0icfzj_'
2021-09-16:19:00:00,135 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpe0icfzj_' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [18:11<00:00, 109.14s/epoch, loss=4.05, prev_loss=4.03]
2021-09-16:19:20:30,566 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpt3izt5hb'
2021-09-16:19:20:30,685 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpt3izt5hb' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:34<00:00, 21.44s/epoch, loss=0.00335, prev_loss=0.00393]
2021-09-16:19:26:24,378 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp4r7az2tq'
2021-09-16:19:26:24,499 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp4r7az2tq' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:47<00:00, 22.73s/epoch, loss=0.00174, prev_loss=0.00172]
2021-09-16:19:32:30,905 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpu8lbxp1e'
2021-09-16:19:32:31,26 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpu8lbxp1e' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:26<00:00, 20.64s/epoch, loss=0.00112, prev_loss=0.00111]
2021-09-16:19:38:16,470 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpr90pkoup'
2021-09-16:19:38:16,577 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpr90pkoup' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [04:01<00:00, 24.18s/epoch, loss=0.000868, prev_loss=0.000865]
2021-09-16:19:44:37,753 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpho3avl10'
2021-09-16:19:44:37,868 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpho3avl10' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:28<00:00, 20.81s/epoch, loss=0.00769, prev_loss=0.00767]
2021-09-16:19:50:24,946 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpcfm6jh_o'
2021-09-16:19:50:25,53 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpcfm6jh_o' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:35<00:00, 21.60s/epoch, loss=0.00335, prev_loss=0.00334]
2021-09-16:19:56:20,169 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpxp3y6wci'
2021-09-16:19:56:20,287 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpxp3y6wci' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [02:49<00:00, 17.00s/epoch, loss=0.00194, prev_loss=0.00193]
2021-09-16:20:01:29,322 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpt6ou394i'
2021-09-16:20:01:29,444 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpt6ou394i' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:00<00:00, 18.06s/epoch, loss=0.00274, prev_loss=0.00273]
2021-09-16:20:06:48,989 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp4q1refxm'
2021-09-16:20:06:49,117 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp4q1refxm' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [06:19<00:00, 37.99s/epoch, loss=0.117, prev_loss=0.281]
2021-09-16:20:15:28,101 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpurg7alwh'
2021-09-16:20:15:28,230 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpurg7alwh' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [18:29<00:00, 110.97s/epoch, loss=8.6, prev_loss=8.26]
2021-09-16:20:36:16,886 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpp35n57_6'
2021-09-16:20:36:17,18 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpp35n57_6' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:35<00:00, 21.55s/epoch, loss=0.0403, prev_loss=0.0475]
2021-09-16:20:42:11,495 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp7znj46go'
2021-09-16:20:42:11,610 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp7znj46go' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:12<00:00, 19.27s/epoch, loss=0.00251, prev_loss=0.00248]
2021-09-16:20:47:43,239 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpijtmeffc'
2021-09-16:20:47:43,351 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpijtmeffc' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:43<00:00, 22.39s/epoch, loss=0.00784, prev_loss=0.0119]
2021-09-16:20:53:46,323 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpbofmzjwg'
2021-09-16:20:53:46,479 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpbofmzjwg' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [04:01<00:00, 24.13s/epoch, loss=0.00794, prev_loss=0.00795]
2021-09-16:21:00:06,756 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpnr_qd52r'
2021-09-16:21:00:06,868 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpnr_qd52r' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:04<00:00, 18.41s/epoch, loss=0.00798, prev_loss=0.00773]
2021-09-16:21:05:29,991 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpguuwgp8u'
2021-09-16:21:05:30,103 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpguuwgp8u' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:42<00:00, 22.24s/epoch, loss=0.00102, prev_loss=0.00101]
2021-09-16:21:11:31,739 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpt80tjusw'
2021-09-16:21:11:31,858 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpt80tjusw' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:05<00:00, 18.51s/epoch, loss=0.000652, prev_loss=0.000647]
2021-09-16:21:16:55,973 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp8i9bre6s'
2021-09-16:21:16:56,82 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp8i9bre6s' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:08<00:00, 18.85s/epoch, loss=0.000456, prev_loss=0.000453]
2021-09-16:21:22:23,705 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp2fhpyk94'
2021-09-16:21:22:23,841 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp2fhpyk94' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:57<00:00, 23.72s/epoch, loss=0.045, prev_loss=0.0488]
2021-09-16:21:28:40,86 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmppltpsgku'
2021-09-16:21:28:40,195 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmppltpsgku' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [02:54<00:00, 17.48s/epoch, loss=0.00482, prev_loss=0.00466]
2021-09-16:21:33:54,1 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmprbq4l82j'
2021-09-16:21:33:54,126 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmprbq4l82j' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [02:50<00:00, 17.07s/epoch, loss=0.00333, prev_loss=0.00324]
2021-09-16:21:39:03,739 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp9l0pg4th'
2021-09-16:21:39:03,860 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp9l0pg4th' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [02:52<00:00, 17.23s/epoch, loss=0.00173, prev_loss=0.00171]
2021-09-16:21:44:15,151 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpeuvy6ecj'
2021-09-16:21:44:15,254 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpeuvy6ecj' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [02:48<00:00, 16.89s/epoch, loss=0.000717, prev_loss=0.000711]
2021-09-16:21:49:23,219 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpoz0ulc5s'
2021-09-16:21:49:23,325 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpoz0ulc5s' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [02:50<00:00, 17.05s/epoch, loss=0.00375, prev_loss=0.00373]
2021-09-16:21:54:32,844 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpcwpaucsz'
2021-09-16:21:54:32,962 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpcwpaucsz' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [02:58<00:00, 17.90s/epoch, loss=0.00085, prev_loss=0.000843]
2021-09-16:21:59:50,899 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpllcsmoey'
2021-09-16:21:59:51,6 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpllcsmoey' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [02:43<00:00, 16.34s/epoch, loss=0.000703, prev_loss=0.000698]
2021-09-16:22:04:53,344 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpfqjur7xz'
2021-09-16:22:04:53,492 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpfqjur7xz' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [02:48<00:00, 16.85s/epoch, loss=0.000682, prev_loss=0.000676]
2021-09-16:22:10:01,366 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpxkkx9r8j'
2021-09-16:22:10:01,487 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpxkkx9r8j' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [02:52<00:00, 17.20s/epoch, loss=0.000392, prev_loss=0.000387]
2021-09-16:22:15:12,732 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpv_vca5z8'
2021-09-16:22:15:12,839 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpv_vca5z8' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:04<00:00, 18.45s/epoch, loss=0.00821, prev_loss=0.00805]
2021-09-16:22:20:36,469 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp9q2itz_x'
2021-09-16:22:20:36,579 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp9q2itz_x' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [04:36<00:00, 27.68s/epoch, loss=0.14, prev_loss=0.135]
2021-09-16:22:27:32,606 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpx76ow0zi'
2021-09-16:22:27:32,715 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpx76ow0zi' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [02:54<00:00, 17.47s/epoch, loss=0.0057, prev_loss=0.00552]
2021-09-16:22:32:46,479 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpfpodzhhe'
2021-09-16:22:32:46,584 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpfpodzhhe' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [02:48<00:00, 16.82s/epoch, loss=0.00112, prev_loss=0.00111]
2021-09-16:22:37:53,761 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpha87i0ij'
2021-09-16:22:37:53,883 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpha87i0ij' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:43<00:00, 22.32s/epoch, loss=0.00126, prev_loss=0.00126]
2021-09-16:22:43:56,342 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpn3axinl9'
2021-09-16:22:43:56,445 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpn3axinl9' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:26<00:00, 20.69s/epoch, loss=0.00341, prev_loss=0.00342]
2021-09-16:22:49:42,563 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpnxd7hpes'
2021-09-16:22:49:42,753 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpnxd7hpes' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [02:52<00:00, 17.24s/epoch, loss=0.000608, prev_loss=0.00061]
2021-09-16:22:54:54,47 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpi96yalsg'
2021-09-16:22:54:54,184 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpi96yalsg' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:58<00:00, 23.89s/epoch, loss=0.000594, prev_loss=0.00059]
2021-09-16:23:01:12,440 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmputt19e02'
2021-09-16:23:01:12,580 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmputt19e02' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:07<00:00, 18.80s/epoch, loss=0.0126, prev_loss=0.0126]
2021-09-16:23:06:39,562 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpyrvm9ztn'
2021-09-16:23:06:39,714 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpyrvm9ztn' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [02:52<00:00, 17.28s/epoch, loss=0.000559, prev_loss=0.000558]
2021-09-16:23:11:51,615 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpfznqin4w'
2021-09-16:23:11:51,741 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpfznqin4w' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:02<00:00, 18.22s/epoch, loss=0.00666, prev_loss=0.00655]
2021-09-16:23:17:12,920 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpbpokgbay'
2021-09-16:23:17:13,40 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpbpokgbay' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [02:43<00:00, 16.37s/epoch, loss=0.000204, prev_loss=0.000218]
2021-09-16:23:22:15,817 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp03tespye'
2021-09-16:23:22:15,938 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp03tespye' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:17<00:00, 19.74s/epoch, loss=0.00234, prev_loss=0.00232]
2021-09-16:23:27:52,513 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp5mfjqxge'
2021-09-16:23:27:52,621 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp5mfjqxge' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [04:51<00:00, 29.19s/epoch, loss=0.0134, prev_loss=0.0134]
2021-09-16:23:35:03,727 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp19fmr10n'
2021-09-16:23:35:03,853 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp19fmr10n' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:04<00:00, 18.46s/epoch, loss=0.000886, prev_loss=0.000875]
2021-09-16:23:40:27,330 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpq8mhlfah'
2021-09-16:23:40:27,445 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpq8mhlfah' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:05<00:00, 18.54s/epoch, loss=0.00517, prev_loss=0.00515]
2021-09-16:23:45:51,743 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp6eryx1gk'
2021-09-16:23:45:51,861 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp6eryx1gk' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [02:42<00:00, 16.28s/epoch, loss=0.00026, prev_loss=0.000258]
2021-09-16:23:50:53,870 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmps9g6x5c9'
2021-09-16:23:50:53,997 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmps9g6x5c9' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [18:40<00:00, 112.02s/epoch, loss=0.193, prev_loss=0.193]
2021-09-17:00:11:53,404 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpebnxo8zz'
2021-09-17:00:11:53,511 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpebnxo8zz' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:19<00:00, 19.96s/epoch, loss=0.00341, prev_loss=0.00339]
2021-09-17:00:17:32,255 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp0iixk16_'
2021-09-17:00:17:32,381 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp0iixk16_' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:28<00:00, 20.83s/epoch, loss=0.000516, prev_loss=0.000511]
2021-09-17:00:23:19,903 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpvi6c7vaf'
2021-09-17:00:23:20,21 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpvi6c7vaf' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [02:56<00:00, 17.68s/epoch, loss=0.00796, prev_loss=0.00767]
2021-09-17:00:28:35,794 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp_ovhlj7q'
2021-09-17:00:28:35,927 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp_ovhlj7q' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:01<00:00, 18.16s/epoch, loss=0.00456, prev_loss=0.00462]
2021-09-17:00:33:56,494 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp_lsow823'
2021-09-17:00:33:56,643 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp_lsow823' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:29<00:00, 20.94s/epoch, loss=0.0424, prev_loss=0.0413]
2021-09-17:00:39:45,192 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpk080jftm'
2021-09-17:00:39:45,325 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpk080jftm' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:35<00:00, 21.54s/epoch, loss=0.0322, prev_loss=0.0314]
2021-09-17:00:45:39,901 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpvbddtyi6'
2021-09-17:00:45:40,17 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpvbddtyi6' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:33<00:00, 21.37s/epoch, loss=0.0518, prev_loss=0.0654]
2021-09-17:00:51:32,905 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpc7out8_z'
2021-09-17:00:51:33,23 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpc7out8_z' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [04:30<00:00, 27.10s/epoch, loss=0.0881, prev_loss=0.0846]
2021-09-17:00:58:23,162 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpomlm7d_l'
2021-09-17:00:58:23,295 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpomlm7d_l' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [04:28<00:00, 26.88s/epoch, loss=0.0902, prev_loss=0.0873]
2021-09-17:01:05:11,101 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpzv5blm7e'
2021-09-17:01:05:11,207 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpzv5blm7e' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [04:28<00:00, 26.87s/epoch, loss=0.0528, prev_loss=0.0522]
2021-09-17:01:11:58,643 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpezw514vm'
2021-09-17:01:11:58,766 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpezw514vm' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:33<00:00, 21.37s/epoch, loss=0.0105, prev_loss=0.0104]
2021-09-17:01:17:51,451 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpoheshhj8'
2021-09-17:01:17:51,557 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpoheshhj8' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [06:16<00:00, 37.66s/epoch, loss=0.0251, prev_loss=0.0284]
2021-09-17:01:26:27,547 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp0w3lelyr'
2021-09-17:01:26:27,650 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp0w3lelyr' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [04:31<00:00, 27.19s/epoch, loss=0.00263, prev_loss=0.00263]
2021-09-17:01:33:18,748 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp7j5fg3wl'
2021-09-17:01:33:18,880 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp7j5fg3wl' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:53<00:00, 23.35s/epoch, loss=0.0311, prev_loss=0.0402]
2021-09-17:01:39:31,617 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpufe5lo7_'
2021-09-17:01:39:31,732 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpufe5lo7_' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:06<00:00, 18.61s/epoch, loss=0.0196, prev_loss=0.00772]
2021-09-17:01:44:56,998 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpd77fnz2b'
2021-09-17:01:44:57,109 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpd77fnz2b' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:55<00:00, 23.54s/epoch, loss=0.0424, prev_loss=0.058]
2021-09-17:01:51:11,627 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp6juiqq40'
2021-09-17:01:51:11,768 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp6juiqq40' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:19<00:00, 19.91s/epoch, loss=0.00268, prev_loss=0.00266]
2021-09-17:01:56:50,64 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp7isqfx02'
2021-09-17:01:56:50,188 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp7isqfx02' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [06:15<00:00, 37.51s/epoch, loss=0.475, prev_loss=0.455]
2021-09-17:02:05:24,477 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpcc5v1oex'
2021-09-17:02:05:24,593 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpcc5v1oex' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [06:38<00:00, 39.89s/epoch, loss=0.017, prev_loss=0.0175]
2021-09-17:02:14:22,497 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpqc4dihce'
2021-09-17:02:14:22,608 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpqc4dihce' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [04:30<00:00, 27.07s/epoch, loss=0.0101, prev_loss=0.00988]
2021-09-17:02:21:12,357 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpz68kdm81'
2021-09-17:02:21:12,464 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpz68kdm81' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [06:11<00:00, 37.20s/epoch, loss=0.00393, prev_loss=0.00626]
2021-09-17:02:29:43,620 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpuwv1ngzy'
2021-09-17:02:29:43,722 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpuwv1ngzy' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [18:19<00:00, 110.00s/epoch, loss=0.0756, prev_loss=0.0756]
2021-09-17:02:50:22,939 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpsgyp40bb'
2021-09-17:02:50:23,55 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpsgyp40bb' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [04:47<00:00, 28.73s/epoch, loss=0.00448, prev_loss=0.00448]
2021-09-17:02:57:29,500 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpr_5kn5za'
2021-09-17:02:57:29,598 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpr_5kn5za' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [06:38<00:00, 39.86s/epoch, loss=0.00922, prev_loss=0.00921]
2021-09-17:03:06:27,412 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpsyfwxza7'
2021-09-17:03:06:27,523 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpsyfwxza7' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [18:28<00:00, 110.84s/epoch, loss=0.06, prev_loss=0.06]
2021-09-17:03:27:15,66 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmphn95mqxg'
2021-09-17:03:27:15,173 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmphn95mqxg' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [04:48<00:00, 28.87s/epoch, loss=0.00354, prev_loss=0.00354]
2021-09-17:03:34:23,263 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp3235yxz3'
2021-09-17:03:34:23,390 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp3235yxz3' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:27<00:00, 20.71s/epoch, loss=0.000872, prev_loss=0.000865]
2021-09-17:03:40:09,680 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpwddbfdg2'
2021-09-17:03:40:09,798 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpwddbfdg2' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [04:04<00:00, 24.49s/epoch, loss=0.00222, prev_loss=0.00222]
2021-09-17:03:46:33,771 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp4c6lduif'
2021-09-17:03:46:33,876 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp4c6lduif' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [02:55<00:00, 17.53s/epoch, loss=0.000303, prev_loss=0.000303]
2021-09-17:03:51:48,336 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpgw5fu1_s'
2021-09-17:03:51:48,456 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpgw5fu1_s' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:07<00:00, 18.73s/epoch, loss=0.000514, prev_loss=0.000514]
2021-09-17:03:57:15,96 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpickl34dm'
2021-09-17:03:57:15,210 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpickl34dm' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [06:40<00:00, 40.09s/epoch, loss=0.0367, prev_loss=0.0368]
2021-09-17:04:06:15,431 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpukrs_f2b'
2021-09-17:04:06:15,541 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpukrs_f2b' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:26<00:00, 20.61s/epoch, loss=0.00463, prev_loss=0.00463]
2021-09-17:04:12:00,786 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp93s2d71u'
2021-09-17:04:12:00,901 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp93s2d71u' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:18<00:00, 19.85s/epoch, loss=0.0358, prev_loss=0.0347]
2021-09-17:04:17:38,465 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpxf3dydzi'
2021-09-17:04:17:38,582 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpxf3dydzi' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:36<00:00, 21.62s/epoch, loss=0.0013, prev_loss=0.00129]
2021-09-17:04:23:34,29 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmprgrc3x4i'
2021-09-17:04:23:34,148 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmprgrc3x4i' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:18<00:00, 19.87s/epoch, loss=0.0396, prev_loss=0.0435]
2021-09-17:04:29:11,958 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp9cdbrgxq'
2021-09-17:04:29:12,79 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp9cdbrgxq' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [04:02<00:00, 24.24s/epoch, loss=0.00253, prev_loss=0.00253]
2021-09-17:04:35:33,619 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpubqu12tb'
2021-09-17:04:35:33,724 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpubqu12tb' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [06:15<00:00, 37.51s/epoch, loss=0.0893, prev_loss=0.107]
2021-09-17:04:44:07,988 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp57xik9mw'
2021-09-17:04:44:08,83 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp57xik9mw' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:26<00:00, 20.68s/epoch, loss=0.00456, prev_loss=0.00457]
2021-09-17:04:49:53,984 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp9sy_365t'
2021-09-17:04:49:54,87 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp9sy_365t' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:49<00:00, 22.95s/epoch, loss=0.000144, prev_loss=0.000144]
2021-09-17:04:56:03,132 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmppputepq2'
2021-09-17:04:56:03,242 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmppputepq2' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:47<00:00, 22.72s/epoch, loss=0.000148, prev_loss=0.000148]
2021-09-17:05:02:09,784 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpl76yfbto'
2021-09-17:05:02:09,902 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpl76yfbto' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [04:10<00:00, 25.04s/epoch, loss=0.000142, prev_loss=0.000142]
2021-09-17:05:08:39,274 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpl5ozebnp'
2021-09-17:05:08:39,390 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpl5ozebnp' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:48<00:00, 22.83s/epoch, loss=0.000157, prev_loss=0.000157]
2021-09-17:05:14:47,6 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpy_qo9vmu'
2021-09-17:05:14:47,128 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpy_qo9vmu' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:48<00:00, 22.86s/epoch, loss=0.000152, prev_loss=0.000151]
2021-09-17:05:20:55,88 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp_8vg42qj'
2021-09-17:05:20:55,228 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp_8vg42qj' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [03:47<00:00, 22.75s/epoch, loss=0.000511, prev_loss=0.00051]
2021-09-17:05:27:01,911 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp776h351y'
2021-09-17:05:27:02,43 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp776h351y' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
=> Saved checkpoint after having finished epoch 10.
Training epochs on cuda: 100%|██████████| 10/10 [04:09<00:00, 53.00s/epoch, loss=0.000147, prev_loss=0.000147] Training epochs on cuda: 100%|██████████| 10/10 [04:09<00:00, 25.00s/epoch, loss=0.000147, prev_loss=0.000147] 2021-09-17:05:33:31,341 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp2lbyi_a4' 2021-09-17:05:33:31,458 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp2lbyi_a4' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [04:07<00:00, 52.84s/epoch, loss=0.000123, prev_loss=0.000123] Training epochs on cuda: 100%|██████████| 10/10 [04:07<00:00, 24.76s/epoch, loss=0.000123, prev_loss=0.000123] 2021-09-17:05:39:58,308 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmphnmcqh29' 2021-09-17:05:39:58,414 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmphnmcqh29' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [04:13<00:00, 53.52s/epoch, loss=0.000813, prev_loss=0.000808] Training epochs on cuda: 100%|██████████| 10/10 [04:13<00:00, 25.32s/epoch, loss=0.000813, prev_loss=0.000808] 2021-09-17:05:46:30,767 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpz_8udi5h' 2021-09-17:05:46:30,877 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpz_8udi5h' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:39<00:00, 50.07s/epoch, loss=0.000455, prev_loss=0.000453] Training epochs on cuda: 100%|██████████| 10/10 [03:39<00:00, 21.92s/epoch, loss=0.000455, prev_loss=0.000453] 2021-09-17:05:52:29,382 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpvwk6jttp' 2021-09-17:05:52:29,485 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpvwk6jttp' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:56<00:00, 51.73s/epoch, loss=0.000566, prev_loss=0.000564] Training epochs on cuda: 100%|██████████| 10/10 [03:56<00:00, 23.65s/epoch, loss=0.000566, prev_loss=0.000564] 2021-09-17:05:58:45,327 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpcdu4dc2y' 2021-09-17:05:58:45,441 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpcdu4dc2y' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:37<00:00, 49.91s/epoch, loss=0.000908, prev_loss=0.000902] Training epochs on cuda: 100%|██████████| 10/10 [03:37<00:00, 21.77s/epoch, loss=0.000908, prev_loss=0.000902] 2021-09-17:06:04:42,269 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp2_09h_6j' 2021-09-17:06:04:42,380 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp2_09h_6j' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. 
Training epochs on cuda: 100%|██████████| 10/10 [03:44<00:00, 50.64s/epoch, loss=0.000396, prev_loss=0.000395] Training epochs on cuda: 100%|██████████| 10/10 [03:44<00:00, 22.42s/epoch, loss=0.000396, prev_loss=0.000395] 2021-09-17:06:10:45,879 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpsi1iz5e6' 2021-09-17:06:10:46,20 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpsi1iz5e6' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:55<00:00, 51.79s/epoch, loss=0.000511, prev_loss=0.000509] Training epochs on cuda: 100%|██████████| 10/10 [03:55<00:00, 23.59s/epoch, loss=0.000511, prev_loss=0.000509] 2021-09-17:06:17:01,292 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp1_8yoxdd' 2021-09-17:06:17:01,411 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp1_8yoxdd' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [04:11<00:00, 53.27s/epoch, loss=0.000878, prev_loss=0.000872] Training epochs on cuda: 100%|██████████| 10/10 [04:11<00:00, 25.16s/epoch, loss=0.000878, prev_loss=0.000872] 2021-09-17:06:23:32,308 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpx1zks8in' 2021-09-17:06:23:32,410 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpx1zks8in' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:46<00:00, 50.73s/epoch, loss=0.000711, prev_loss=0.000709] Training epochs on cuda: 100%|██████████| 10/10 [03:46<00:00, 22.65s/epoch, loss=0.000711, prev_loss=0.000709] 2021-09-17:06:29:38,176 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpjttb3t43' 2021-09-17:06:29:38,299 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpjttb3t43' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:57<00:00, 51.86s/epoch, loss=0.000156, prev_loss=0.000156] Training epochs on cuda: 100%|██████████| 10/10 [03:57<00:00, 23.74s/epoch, loss=0.000156, prev_loss=0.000156] 2021-09-17:06:35:55,246 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmphe979kcw' 2021-09-17:06:35:55,358 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmphe979kcw' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:52<00:00, 51.27s/epoch, loss=0.000833, prev_loss=0.000829] Training epochs on cuda: 100%|██████████| 10/10 [03:52<00:00, 23.24s/epoch, loss=0.000833, prev_loss=0.000829] 2021-09-17:06:42:07,420 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpq9y02fyw' 2021-09-17:06:42:07,532 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpq9y02fyw' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. 
Training epochs on cuda: 100%|██████████| 10/10 [04:13<00:00, 53.32s/epoch, loss=0.00085, prev_loss=0.000844] Training epochs on cuda: 100%|██████████| 10/10 [04:13<00:00, 25.34s/epoch, loss=0.00085, prev_loss=0.000844] 2021-09-17:06:48:40,22 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpvdgf8pvt' 2021-09-17:06:48:40,135 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpvdgf8pvt' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:31<00:00, 49.07s/epoch, loss=0.000564, prev_loss=0.00056] Training epochs on cuda: 100%|██████████| 10/10 [03:31<00:00, 21.18s/epoch, loss=0.000564, prev_loss=0.00056] 2021-09-17:06:54:31,259 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpbzbrujik' 2021-09-17:06:54:31,386 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpbzbrujik' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:40<00:00, 49.88s/epoch, loss=0.000659, prev_loss=0.000654] Training epochs on cuda: 100%|██████████| 10/10 [03:40<00:00, 22.01s/epoch, loss=0.000659, prev_loss=0.000654] 2021-09-17:07:00:30,495 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpkv0ccgsu' 2021-09-17:07:00:30,619 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpkv0ccgsu' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:29<00:00, 48.75s/epoch, loss=0.000172, prev_loss=0.000172] Training epochs on cuda: 100%|██████████| 10/10 [03:29<00:00, 20.94s/epoch, loss=0.000172, prev_loss=0.000172] 2021-09-17:07:06:19,133 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpu_50228v' 2021-09-17:07:06:19,249 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpu_50228v' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:28<00:00, 48.84s/epoch, loss=0.000176, prev_loss=0.000176] Training epochs on cuda: 100%|██████████| 10/10 [03:28<00:00, 20.87s/epoch, loss=0.000176, prev_loss=0.000176] 2021-09-17:07:12:06,218 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpd_lwo2rd' 2021-09-17:07:12:06,313 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpd_lwo2rd' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:28<00:00, 48.79s/epoch, loss=0.000848, prev_loss=0.000841] Training epochs on cuda: 100%|██████████| 10/10 [03:28<00:00, 20.81s/epoch, loss=0.000848, prev_loss=0.000841] 2021-09-17:07:17:53,354 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp50lge4fb' 2021-09-17:07:17:53,460 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp50lge4fb' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. 
Training epochs on cuda: 100%|██████████| 10/10 [04:02<00:00, 52.08s/epoch, loss=0.000685, prev_loss=0.00068] Training epochs on cuda: 100%|██████████| 10/10 [04:02<00:00, 24.22s/epoch, loss=0.000685, prev_loss=0.00068] 2021-09-17:07:24:14,822 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpbk69l0u6' 2021-09-17:07:24:14,947 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpbk69l0u6' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:40<00:00, 50.14s/epoch, loss=0.000192, prev_loss=0.000191] Training epochs on cuda: 100%|██████████| 10/10 [03:40<00:00, 22.09s/epoch, loss=0.000192, prev_loss=0.000191] 2021-09-17:07:30:14,867 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpndt5qnm0' 2021-09-17:07:30:14,999 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpndt5qnm0' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:36<00:00, 49.70s/epoch, loss=0.000172, prev_loss=0.000172] Training epochs on cuda: 100%|██████████| 10/10 [03:36<00:00, 21.70s/epoch, loss=0.000172, prev_loss=0.000172] 2021-09-17:07:36:11,112 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp7x84sqj2' 2021-09-17:07:36:11,211 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp7x84sqj2' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:30<00:00, 48.95s/epoch, loss=0.000147, prev_loss=0.000147] Training epochs on cuda: 100%|██████████| 10/10 [03:30<00:00, 21.02s/epoch, loss=0.000147, prev_loss=0.000147] 2021-09-17:07:42:00,530 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp9h3ok96d' 2021-09-17:07:42:00,645 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp9h3ok96d' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:37<00:00, 49.61s/epoch, loss=0.0012, prev_loss=0.00119] Training epochs on cuda: 100%|██████████| 10/10 [03:37<00:00, 21.71s/epoch, loss=0.0012, prev_loss=0.00119] 2021-09-17:07:47:56,687 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpbwyezes2' 2021-09-17:07:47:56,808 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpbwyezes2' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:39<00:00, 49.96s/epoch, loss=0.000837, prev_loss=0.00083] Training epochs on cuda: 100%|██████████| 10/10 [03:39<00:00, 21.98s/epoch, loss=0.000837, prev_loss=0.00083] 2021-09-17:07:53:55,741 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp2v0chawv' 2021-09-17:07:53:55,854 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp2v0chawv' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. 
Training epochs on cuda: 100%|██████████| 10/10 [03:44<00:00, 22.47s/epoch, loss=0.00104, prev_loss=0.00103]
2021-09-17:07:59:59,571 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp3c7i97g1'
2021-09-17:07:59:59,704 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp3c7i97g1' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00 … (remainder of the evaluation progress output lost in the capture)
Saved checkpoint after having finished epoch 10.
[… the remaining trial blocks captured between 2021-09-17 07:59 and 10:54 repeat the same pattern (10 training epochs in roughly 3-4 minutes each, checkpoint loaded from /tmp, evaluation on 38.8k triples, checkpoint saved after epoch 10), with final training losses roughly between 1.3e-4 and 2.8e-3 …]
Training epochs on cuda: 100%|██████████| 10/10 [03:39<00:00, 21.92s/epoch, loss=0.000967, prev_loss=0.000959]
2021-09-17:11:00:03,388 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpcg9qchf1'
2021-09-17:11:00:03,505 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpcg9qchf1' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00 … (remainder of the evaluation progress output lost in the capture)
Saved checkpoint after having finished epoch 10.
[… the remaining trial blocks captured between 2021-09-17 11:00 and 13:54 repeat the same pattern (10 training epochs in roughly 3-4 minutes each, checkpoint loaded from /tmp, evaluation on 38.8k triples, checkpoint saved after epoch 10), with final training losses roughly between 6e-5 and 1e-3 …]
Training epochs on cuda: 100%|██████████| 10/10 [03:42<00:00, 22.27s/epoch, loss=0.000266, prev_loss=0.000266]
2021-09-17:14:00:24,481 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpsot7ms8h'
2021-09-17:14:00:24,605 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpsot7ms8h' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00 … (remainder of the evaluation progress output lost in the capture)
Saved checkpoint after having finished epoch 10.
[… the remaining trial blocks captured between 2021-09-17 14:00 and 17:43 repeat the same pattern (10 training epochs in roughly 3-4 minutes each, checkpoint loaded from /tmp, evaluation on 38.8k triples, checkpoint saved after epoch 10), with final training losses roughly between 6e-5 and 7e-4 …]
Training epochs on cuda: 100%|██████████| 10/10 [03:55<00:00, 51.51s/epoch, loss=7.15e-5, prev_loss=7.37e-5] Training epochs on cuda: 100%|██████████| 10/10 [03:55<00:00, 23.52s/epoch, loss=7.15e-5, prev_loss=7.37e-5] 2021-09-17:17:49:35,974 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpeqk27i8t' 2021-09-17:17:49:36,94 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpeqk27i8t' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [04:14<00:00, 53.59s/epoch, loss=7.13e-5, prev_loss=7.22e-5] Training epochs on cuda: 100%|██████████| 10/10 [04:14<00:00, 25.46s/epoch, loss=7.13e-5, prev_loss=7.22e-5] 2021-09-17:17:56:10,859 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpwc70yakf' 2021-09-17:17:56:11,22 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpwc70yakf' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:55<00:00, 51.64s/epoch, loss=8.27e-5, prev_loss=8.36e-5] Training epochs on cuda: 100%|██████████| 10/10 [03:55<00:00, 23.50s/epoch, loss=8.27e-5, prev_loss=8.36e-5] 2021-09-17:18:02:26,98 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpulep7324' 2021-09-17:18:02:26,258 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpulep7324' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:52<00:00, 51.48s/epoch, loss=6.75e-5, prev_loss=6.97e-5] Training epochs on cuda: 100%|██████████| 10/10 [03:52<00:00, 23.26s/epoch, loss=6.75e-5, prev_loss=6.97e-5] 2021-09-17:18:08:38,734 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpfw6t4mjn' 2021-09-17:18:08:38,915 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpfw6t4mjn' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [04:13<00:00, 53.54s/epoch, loss=7.86e-5, prev_loss=7.99e-5] Training epochs on cuda: 100%|██████████| 10/10 [04:13<00:00, 25.39s/epoch, loss=7.86e-5, prev_loss=7.99e-5] 2021-09-17:18:15:12,697 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpyuejwudh' 2021-09-17:18:15:12,863 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpyuejwudh' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [04:14<00:00, 53.60s/epoch, loss=7.1e-5, prev_loss=7.25e-5] Training epochs on cuda: 100%|██████████| 10/10 [04:14<00:00, 25.42s/epoch, loss=7.1e-5, prev_loss=7.25e-5] 2021-09-17:18:21:47,315 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpouruzrsx' 2021-09-17:18:21:47,456 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpouruzrsx' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. 
Training epochs on cuda: 100%|██████████| 10/10 [04:04<00:00, 52.56s/epoch, loss=6.78e-5, prev_loss=7.05e-5] Training epochs on cuda: 100%|██████████| 10/10 [04:04<00:00, 24.42s/epoch, loss=6.78e-5, prev_loss=7.05e-5] 2021-09-17:18:28:11,728 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpnwmz1pck' 2021-09-17:18:28:11,854 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpnwmz1pck' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:52<00:00, 51.47s/epoch, loss=7.67e-5, prev_loss=8.01e-5] Training epochs on cuda: 100%|██████████| 10/10 [03:52<00:00, 23.24s/epoch, loss=7.67e-5, prev_loss=8.01e-5] 2021-09-17:18:34:24,311 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpzdnqrvzb' 2021-09-17:18:34:24,483 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpzdnqrvzb' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:52<00:00, 51.36s/epoch, loss=7.9e-5, prev_loss=8.22e-5] Training epochs on cuda: 100%|██████████| 10/10 [03:52<00:00, 23.27s/epoch, loss=7.9e-5, prev_loss=8.22e-5] 2021-09-17:18:40:37,424 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmplznymmsc' 2021-09-17:18:40:37,586 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmplznymmsc' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:51<00:00, 51.22s/epoch, loss=7.18e-5, prev_loss=7.63e-5] Training epochs on cuda: 100%|██████████| 10/10 [03:51<00:00, 23.11s/epoch, loss=7.18e-5, prev_loss=7.63e-5] 2021-09-17:18:46:48,735 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmptkzqsvqb' 2021-09-17:18:46:48,872 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmptkzqsvqb' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:59<00:00, 52.08s/epoch, loss=6.96e-5, prev_loss=7.42e-5] Training epochs on cuda: 100%|██████████| 10/10 [03:59<00:00, 23.91s/epoch, loss=6.96e-5, prev_loss=7.42e-5] 2021-09-17:18:53:08,97 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp6zubco6a' 2021-09-17:18:53:08,261 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp6zubco6a' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [04:01<00:00, 52.39s/epoch, loss=6.8e-5, prev_loss=7.23e-5] Training epochs on cuda: 100%|██████████| 10/10 [04:01<00:00, 24.18s/epoch, loss=6.8e-5, prev_loss=7.23e-5] 2021-09-17:18:59:30,267 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpwffmoz_9' 2021-09-17:18:59:30,433 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpwffmoz_9' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. 
Training epochs on cuda: 100%|██████████| 10/10 [03:50<00:00, 51.34s/epoch, loss=7.76e-5, prev_loss=8.3e-5] Training epochs on cuda: 100%|██████████| 10/10 [03:50<00:00, 23.08s/epoch, loss=7.76e-5, prev_loss=8.3e-5] 2021-09-17:19:05:41,321 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpbqvkpgv5' 2021-09-17:19:05:41,490 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpbqvkpgv5' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:50<00:00, 51.46s/epoch, loss=7.79e-5, prev_loss=8.35e-5] Training epochs on cuda: 100%|██████████| 10/10 [03:50<00:00, 23.08s/epoch, loss=7.79e-5, prev_loss=8.35e-5] 2021-09-17:19:11:52,351 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpvzubf2vs' 2021-09-17:19:11:52,577 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpvzubf2vs' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:47<00:00, 51.18s/epoch, loss=0.000266, prev_loss=0.000265] Training epochs on cuda: 100%|██████████| 10/10 [03:47<00:00, 22.78s/epoch, loss=0.000266, prev_loss=0.000265] 2021-09-17:19:18:00,846 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpaf_62gxd' 2021-09-17:19:18:00,957 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpaf_62gxd' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:49<00:00, 51.22s/epoch, loss=0.000283, prev_loss=0.000283] Training epochs on cuda: 100%|██████████| 10/10 [03:49<00:00, 22.95s/epoch, loss=0.000283, prev_loss=0.000283] 2021-09-17:19:24:10,503 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmph0h98au_' 2021-09-17:19:24:10,676 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmph0h98au_' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:50<00:00, 51.29s/epoch, loss=0.000733, prev_loss=0.000734] Training epochs on cuda: 100%|██████████| 10/10 [03:50<00:00, 23.04s/epoch, loss=0.000733, prev_loss=0.000734] 2021-09-17:19:30:21,163 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpvycgo8vx' 2021-09-17:19:30:21,327 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpvycgo8vx' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [04:00<00:00, 52.29s/epoch, loss=0.00066, prev_loss=0.00066] Training epochs on cuda: 100%|██████████| 10/10 [04:00<00:00, 24.05s/epoch, loss=0.00066, prev_loss=0.00066] 2021-09-17:19:36:41,902 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpopar9xlj' 2021-09-17:19:36:42,44 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpopar9xlj' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. 
Training epochs on cuda: 100%|██████████| 10/10 [04:02<00:00, 52.43s/epoch, loss=7.34e-5, prev_loss=7.87e-5] Training epochs on cuda: 100%|██████████| 10/10 [04:02<00:00, 24.21s/epoch, loss=7.34e-5, prev_loss=7.87e-5] 2021-09-17:19:43:04,425 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp5r13ssvd' 2021-09-17:19:43:04,576 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp5r13ssvd' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [04:00<00:00, 52.31s/epoch, loss=7.32e-5, prev_loss=7.91e-5] Training epochs on cuda: 100%|██████████| 10/10 [04:00<00:00, 24.04s/epoch, loss=7.32e-5, prev_loss=7.91e-5] 2021-09-17:19:49:25,68 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp0ftytq0c' 2021-09-17:19:49:25,210 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp0ftytq0c' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:54<00:00, 51.54s/epoch, loss=0.000282, prev_loss=0.000282] Training epochs on cuda: 100%|██████████| 10/10 [03:54<00:00, 23.40s/epoch, loss=0.000282, prev_loss=0.000282] 2021-09-17:19:55:39,334 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpgib9k4hu' 2021-09-17:19:55:39,480 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpgib9k4hu' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:54<00:00, 51.39s/epoch, loss=7.65e-5, prev_loss=8.22e-5] Training epochs on cuda: 100%|██████████| 10/10 [03:54<00:00, 23.44s/epoch, loss=7.65e-5, prev_loss=8.22e-5] 2021-09-17:20:01:53,608 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpla3jo_n5' 2021-09-17:20:01:53,746 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpla3jo_n5' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:57<00:00, 51.78s/epoch, loss=7.38e-5, prev_loss=8.05e-5] Training epochs on cuda: 100%|██████████| 10/10 [03:57<00:00, 23.72s/epoch, loss=7.38e-5, prev_loss=8.05e-5] 2021-09-17:20:08:09,533 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpdohy2coh' 2021-09-17:20:08:09,689 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpdohy2coh' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:44<00:00, 50.56s/epoch, loss=0.000265, prev_loss=0.000265] Training epochs on cuda: 100%|██████████| 10/10 [03:44<00:00, 22.48s/epoch, loss=0.000265, prev_loss=0.000265] 2021-09-17:20:14:13,498 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmprm5ftt7c' 2021-09-17:20:14:13,651 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmprm5ftt7c' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. 
Training epochs on cuda: 100%|██████████| 10/10 [03:51<00:00, 51.15s/epoch, loss=7.59e-5, prev_loss=8.26e-5] Training epochs on cuda: 100%|██████████| 10/10 [03:51<00:00, 23.18s/epoch, loss=7.59e-5, prev_loss=8.26e-5] 2021-09-17:20:20:25,553 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpzgf2daw5' 2021-09-17:20:20:25,701 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpzgf2daw5' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:51<00:00, 51.29s/epoch, loss=0.000277, prev_loss=0.000277] Training epochs on cuda: 100%|██████████| 10/10 [03:51<00:00, 23.17s/epoch, loss=0.000277, prev_loss=0.000277] 2021-09-17:20:26:37,378 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpb3borivz' 2021-09-17:20:26:37,572 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpb3borivz' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:45<00:00, 50.63s/epoch, loss=0.000261, prev_loss=0.000261] Training epochs on cuda: 100%|██████████| 10/10 [03:45<00:00, 22.52s/epoch, loss=0.000261, prev_loss=0.000261] 2021-09-17:20:32:42,742 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp2gk9plrm' 2021-09-17:20:32:42,920 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp2gk9plrm' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:58<00:00, 51.88s/epoch, loss=0.000272, prev_loss=0.000273] Training epochs on cuda: 100%|██████████| 10/10 [03:58<00:00, 23.85s/epoch, loss=0.000272, prev_loss=0.000273] 2021-09-17:20:39:01,556 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpntccnohi' 2021-09-17:20:39:01,716 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpntccnohi' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:49<00:00, 51.13s/epoch, loss=0.000284, prev_loss=0.000284] Training epochs on cuda: 100%|██████████| 10/10 [03:49<00:00, 22.98s/epoch, loss=0.000284, prev_loss=0.000284] 2021-09-17:20:45:11,421 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp83dubl_i' 2021-09-17:20:45:11,585 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp83dubl_i' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:59<00:00, 52.06s/epoch, loss=0.000781, prev_loss=0.000785] Training epochs on cuda: 100%|██████████| 10/10 [03:59<00:00, 23.92s/epoch, loss=0.000781, prev_loss=0.000785] 2021-09-17:20:51:30,921 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp2we4w8qw' 2021-09-17:20:51:31,61 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp2we4w8qw' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. 
Training epochs on cuda: 100%|██████████| 10/10 [03:53<00:00, 51.52s/epoch, loss=7.26e-5, prev_loss=7.85e-5] Training epochs on cuda: 100%|██████████| 10/10 [03:53<00:00, 23.31s/epoch, loss=7.26e-5, prev_loss=7.85e-5] 2021-09-17:20:57:44,187 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpak09f3vs' 2021-09-17:20:57:44,330 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpak09f3vs' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:41<00:00, 50.13s/epoch, loss=0.000251, prev_loss=0.000252] Training epochs on cuda: 100%|██████████| 10/10 [03:41<00:00, 22.13s/epoch, loss=0.000251, prev_loss=0.000252] 2021-09-17:21:03:45,521 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpma0_n4os' 2021-09-17:21:03:45,648 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpma0_n4os' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:47<00:00, 50.87s/epoch, loss=8.52e-5, prev_loss=9.37e-5] Training epochs on cuda: 100%|██████████| 10/10 [03:47<00:00, 22.74s/epoch, loss=8.52e-5, prev_loss=9.37e-5] 2021-09-17:21:09:52,671 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp8my_nalm' 2021-09-17:21:09:52,776 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp8my_nalm' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:47<00:00, 50.83s/epoch, loss=0.000407, prev_loss=0.000407] Training epochs on cuda: 100%|██████████| 10/10 [03:47<00:00, 22.71s/epoch, loss=0.000407, prev_loss=0.000407] 2021-09-17:21:15:59,600 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmplbk2vz4i' 2021-09-17:21:15:59,751 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmplbk2vz4i' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:40<00:00, 50.03s/epoch, loss=0.00027, prev_loss=0.000269] Training epochs on cuda: 100%|██████████| 10/10 [03:40<00:00, 22.02s/epoch, loss=0.00027, prev_loss=0.000269] 2021-09-17:21:21:59,696 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpz6jgf_hq' 2021-09-17:21:21:59,816 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpz6jgf_hq' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:46<00:00, 50.72s/epoch, loss=0.000279, prev_loss=0.000278] Training epochs on cuda: 100%|██████████| 10/10 [03:46<00:00, 22.66s/epoch, loss=0.000279, prev_loss=0.000278] 2021-09-17:21:28:06,191 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp50euuym2' 2021-09-17:21:28:06,314 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp50euuym2' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. 
Training epochs on cuda: 100%|██████████| 10/10 [03:56<00:00, 51.80s/epoch, loss=7.81e-5, prev_loss=8.55e-5] Training epochs on cuda: 100%|██████████| 10/10 [03:56<00:00, 23.63s/epoch, loss=7.81e-5, prev_loss=8.55e-5] 2021-09-17:21:34:22,374 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpmou524nq' 2021-09-17:21:34:22,505 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpmou524nq' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:38<00:00, 49.99s/epoch, loss=0.000261, prev_loss=0.000266] Training epochs on cuda: 100%|██████████| 10/10 [03:38<00:00, 21.81s/epoch, loss=0.000261, prev_loss=0.000266] 2021-09-17:21:40:20,366 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp4h73vk76' 2021-09-17:21:40:20,516 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp4h73vk76' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:55<00:00, 51.83s/epoch, loss=0.000291, prev_loss=0.00029] Training epochs on cuda: 100%|██████████| 10/10 [03:55<00:00, 23.56s/epoch, loss=0.000291, prev_loss=0.00029] 2021-09-17:21:46:37,746 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp9sg5m8z7' 2021-09-17:21:46:37,884 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp9sg5m8z7' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:49<00:00, 51.14s/epoch, loss=7.31e-5, prev_loss=7.89e-5] Training epochs on cuda: 100%|██████████| 10/10 [03:49<00:00, 22.98s/epoch, loss=7.31e-5, prev_loss=7.89e-5] 2021-09-17:21:52:47,348 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpa6o2ghx1' 2021-09-17:21:52:47,467 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpa6o2ghx1' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:44<00:00, 50.57s/epoch, loss=0.000273, prev_loss=0.000273] Training epochs on cuda: 100%|██████████| 10/10 [03:44<00:00, 22.47s/epoch, loss=0.000273, prev_loss=0.000273] 2021-09-17:21:58:51,903 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpfm6za9up' 2021-09-17:21:58:52,18 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpfm6za9up' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:48<00:00, 50.99s/epoch, loss=0.000257, prev_loss=0.000257] Training epochs on cuda: 100%|██████████| 10/10 [03:48<00:00, 22.87s/epoch, loss=0.000257, prev_loss=0.000257] 2021-09-17:22:05:00,482 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp_p3zt9ve' 2021-09-17:22:05:00,612 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp_p3zt9ve' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. 
Training epochs on cuda: 100%|██████████| 10/10 [03:39<00:00, 50.20s/epoch, loss=0.000261, prev_loss=0.000264] Training epochs on cuda: 100%|██████████| 10/10 [03:39<00:00, 21.99s/epoch, loss=0.000261, prev_loss=0.000264] 2021-09-17:22:11:00,528 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpr7e0neu_' 2021-09-17:22:11:00,641 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpr7e0neu_' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:47<00:00, 50.64s/epoch, loss=7.79e-5, prev_loss=8.5e-5] Training epochs on cuda: 100%|██████████| 10/10 [03:47<00:00, 22.71s/epoch, loss=7.79e-5, prev_loss=8.5e-5] 2021-09-17:22:17:06,422 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpdcld7nbr' 2021-09-17:22:17:06,577 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpdcld7nbr' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:46<00:00, 50.73s/epoch, loss=0.000528, prev_loss=0.000529] Training epochs on cuda: 100%|██████████| 10/10 [03:46<00:00, 22.61s/epoch, loss=0.000528, prev_loss=0.000529] 2021-09-17:22:23:41,306 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp1ux57rs7' 2021-09-17:22:23:41,492 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp1ux57rs7' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:55<00:00, 51.40s/epoch, loss=0.00029, prev_loss=0.000289] Training epochs on cuda: 100%|██████████| 10/10 [03:55<00:00, 23.50s/epoch, loss=0.00029, prev_loss=0.000289] 2021-09-17:22:29:55,158 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpblkzszod' 2021-09-17:22:29:55,264 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpblkzszod' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:43<00:00, 50.17s/epoch, loss=0.000122, prev_loss=0.000123] Training epochs on cuda: 100%|██████████| 10/10 [03:43<00:00, 22.30s/epoch, loss=0.000122, prev_loss=0.000123] 2021-09-17:22:35:58,58 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpwgh8pubp' 2021-09-17:22:35:58,201 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpwgh8pubp' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:34<00:00, 49.16s/epoch, loss=0.000391, prev_loss=0.000389] Training epochs on cuda: 100%|██████████| 10/10 [03:34<00:00, 21.40s/epoch, loss=0.000391, prev_loss=0.000389] 2021-09-17:22:41:50,949 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp6mhlsrqe' 2021-09-17:22:41:51,65 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp6mhlsrqe' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. 
Training epochs on cuda: 100%|██████████| 10/10 [03:54<00:00, 51.53s/epoch, loss=0.000291, prev_loss=0.000291] Training epochs on cuda: 100%|██████████| 10/10 [03:54<00:00, 23.48s/epoch, loss=0.000291, prev_loss=0.000291] 2021-09-17:22:48:04,332 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp3ktblg64' 2021-09-17:22:48:04,488 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp3ktblg64' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:46<00:00, 50.74s/epoch, loss=0.000272, prev_loss=0.000273] Training epochs on cuda: 100%|██████████| 10/10 [03:46<00:00, 22.68s/epoch, loss=0.000272, prev_loss=0.000273] 2021-09-17:22:54:10,906 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpe_jjrlnu' 2021-09-17:22:54:11,23 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpe_jjrlnu' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:42<00:00, 50.26s/epoch, loss=0.000289, prev_loss=0.000289] Training epochs on cuda: 100%|██████████| 10/10 [03:42<00:00, 22.23s/epoch, loss=0.000289, prev_loss=0.000289] 2021-09-17:23:00:13,241 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp53fuafj1' 2021-09-17:23:00:13,383 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp53fuafj1' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:53<00:00, 51.42s/epoch, loss=6.18e-5, prev_loss=6.63e-5] Training epochs on cuda: 100%|██████████| 10/10 [03:53<00:00, 23.39s/epoch, loss=6.18e-5, prev_loss=6.63e-5] 2021-09-17:23:06:26,850 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp1w1755_7' 2021-09-17:23:06:27,17 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp1w1755_7' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:46<00:00, 50.66s/epoch, loss=6.61e-5, prev_loss=7.11e-5] Training epochs on cuda: 100%|██████████| 10/10 [03:46<00:00, 22.67s/epoch, loss=6.61e-5, prev_loss=7.11e-5] 2021-09-17:23:12:33,375 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpbba83k4_' 2021-09-17:23:12:33,493 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpbba83k4_' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:52<00:00, 51.35s/epoch, loss=7.04e-5, prev_loss=7.7e-5] Training epochs on cuda: 100%|██████████| 10/10 [03:52<00:00, 23.21s/epoch, loss=7.04e-5, prev_loss=7.7e-5] 2021-09-17:23:18:45,55 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpxhqc05xe' 2021-09-17:23:18:45,197 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpxhqc05xe' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. 
Training epochs on cuda: 100%|██████████| 10/10 [03:49<00:00, 51.19s/epoch, loss=0.000258, prev_loss=0.000257] Training epochs on cuda: 100%|██████████| 10/10 [03:49<00:00, 22.98s/epoch, loss=0.000258, prev_loss=0.000257] 2021-09-17:23:24:55,372 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp11u3uef6' 2021-09-17:23:24:55,516 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp11u3uef6' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:37<00:00, 49.84s/epoch, loss=0.000251, prev_loss=0.000253] Training epochs on cuda: 100%|██████████| 10/10 [03:37<00:00, 21.74s/epoch, loss=0.000251, prev_loss=0.000253] 2021-09-17:23:30:52,636 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpaxsyek2h' 2021-09-17:23:30:52,770 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpaxsyek2h' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:44<00:00, 50.56s/epoch, loss=0.000236, prev_loss=0.000236] Training epochs on cuda: 100%|██████████| 10/10 [03:44<00:00, 22.44s/epoch, loss=0.000236, prev_loss=0.000236] 2021-09-17:23:36:56,808 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpmdlp_dvg' 2021-09-17:23:36:56,928 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpmdlp_dvg' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:55<00:00, 51.63s/epoch, loss=7.23e-5, prev_loss=7.9e-5] Training epochs on cuda: 100%|██████████| 10/10 [03:55<00:00, 23.60s/epoch, loss=7.23e-5, prev_loss=7.9e-5] 2021-09-17:23:43:12,672 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp1ktfrdz0' 2021-09-17:23:43:12,800 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp1ktfrdz0' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [04:00<00:00, 52.79s/epoch, loss=0.000266, prev_loss=0.000265] Training epochs on cuda: 100%|██████████| 10/10 [04:00<00:00, 24.04s/epoch, loss=0.000266, prev_loss=0.000265] 2021-09-17:23:49:35,729 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpxaocnnm_' 2021-09-17:23:49:35,883 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpxaocnnm_' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:40<00:00, 50.25s/epoch, loss=0.000263, prev_loss=0.000263] Training epochs on cuda: 100%|██████████| 10/10 [03:40<00:00, 22.02s/epoch, loss=0.000263, prev_loss=0.000263] 2021-09-17:23:55:35,832 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmprigl2ri5' 2021-09-17:23:55:35,976 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmprigl2ri5' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. 
Training epochs on cuda: 100%|██████████| 10/10 [02:50<00:00, 45.08s/epoch, loss=0.000299, prev_loss=0.000325] Training epochs on cuda: 100%|██████████| 10/10 [02:50<00:00, 17.02s/epoch, loss=0.000299, prev_loss=0.000325] 2021-09-18:00:00:45,467 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpg0dtfrbq' 2021-09-18:00:00:45,580 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpg0dtfrbq' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:48<00:00, 51.01s/epoch, loss=0.000267, prev_loss=0.000267] Training epochs on cuda: 100%|██████████| 10/10 [03:48<00:00, 22.86s/epoch, loss=0.000267, prev_loss=0.000267] 2021-09-18:00:06:53,922 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpj_pr2udx' 2021-09-18:00:06:54,44 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpj_pr2udx' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:56<00:00, 51.86s/epoch, loss=0.00026, prev_loss=0.000261] Training epochs on cuda: 100%|██████████| 10/10 [03:56<00:00, 23.61s/epoch, loss=0.00026, prev_loss=0.000261] 2021-09-18:00:13:10,41 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpr4fnuz0t' 2021-09-18:00:13:10,147 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpr4fnuz0t' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:33<00:00, 49.66s/epoch, loss=0.000256, prev_loss=0.000255] Training epochs on cuda: 100%|██████████| 10/10 [03:33<00:00, 21.30s/epoch, loss=0.000256, prev_loss=0.000255] 2021-09-18:00:30:40,188 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpnkda10b4' 2021-09-18:00:30:40,381 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpnkda10b4' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [02:58<00:00, 45.85s/epoch, loss=0.000329, prev_loss=0.000327] Training epochs on cuda: 100%|██████████| 10/10 [02:58<00:00, 17.82s/epoch, loss=0.000329, prev_loss=0.000327] 2021-09-18:00:35:57,894 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpsohwc03u' 2021-09-18:00:35:58,8 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpsohwc03u' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:43<00:00, 50.54s/epoch, loss=0.000305, prev_loss=0.000329] Training epochs on cuda: 100%|██████████| 10/10 [03:43<00:00, 22.36s/epoch, loss=0.000305, prev_loss=0.000329] 2021-09-18:00:42:00,651 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmprgb8i4jy' 2021-09-18:00:42:00,791 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmprgb8i4jy' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. 
Training epochs on cuda: 100%|██████████| 10/10 [03:53<00:00, 51.47s/epoch, loss=6.41e-5, prev_loss=6.87e-5] Training epochs on cuda: 100%|██████████| 10/10 [03:53<00:00, 23.37s/epoch, loss=6.41e-5, prev_loss=6.87e-5] 2021-09-18:00:48:14,90 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpjla593o8' 2021-09-18:00:48:14,235 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpjla593o8' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:46<00:00, 50.69s/epoch, loss=0.000533, prev_loss=0.000536] Training epochs on cuda: 100%|██████████| 10/10 [03:46<00:00, 22.65s/epoch, loss=0.000533, prev_loss=0.000536] 2021-09-18:00:54:20,33 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp37oe0h4f' 2021-09-18:00:54:20,153 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp37oe0h4f' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:48<00:00, 50.89s/epoch, loss=0.000248, prev_loss=0.000248] Training epochs on cuda: 100%|██████████| 10/10 [03:48<00:00, 22.89s/epoch, loss=0.000248, prev_loss=0.000248] 2021-09-18:01:00:28,281 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmp52ider_g' 2021-09-18:01:00:28,396 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmp52ider_g' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:55<00:00, 51.98s/epoch, loss=0.000128, prev_loss=0.000128] Training epochs on cuda: 100%|██████████| 10/10 [03:55<00:00, 23.54s/epoch, loss=0.000128, prev_loss=0.000128] 2021-09-18:01:06:42,925 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpltcqneud' 2021-09-18:01:06:43,58 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpltcqneud' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:46<00:00, 50.58s/epoch, loss=8.04e-5, prev_loss=8.81e-5] Training epochs on cuda: 100%|██████████| 10/10 [03:46<00:00, 22.62s/epoch, loss=8.04e-5, prev_loss=8.81e-5] 2021-09-18:01:12:48,773 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpeiq7uj6g' 2021-09-18:01:12:48,898 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpeiq7uj6g' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:35<00:00, 49.62s/epoch, loss=0.000263, prev_loss=0.000263] Training epochs on cuda: 100%|██████████| 10/10 [03:35<00:00, 21.51s/epoch, loss=0.000263, prev_loss=0.000263] 2021-09-18:01:18:43,193 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpys2wk_2i' 2021-09-18:01:18:43,322 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpys2wk_2i' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. 
Training epochs on cuda: 100%|██████████| 10/10 [03:46<00:00, 50.64s/epoch, loss=0.000272, prev_loss=0.000276] Training epochs on cuda: 100%|██████████| 10/10 [03:46<00:00, 22.60s/epoch, loss=0.000272, prev_loss=0.000276] 2021-09-18:01:24:48,738 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpi5ye1wbf' 2021-09-18:01:24:48,872 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpi5ye1wbf' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:32<00:00, 49.27s/epoch, loss=8.56e-5, prev_loss=9.36e-5] Training epochs on cuda: 100%|██████████| 10/10 [03:32<00:00, 21.22s/epoch, loss=8.56e-5, prev_loss=9.36e-5] 2021-09-18:01:30:40,29 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpl8nednvi' 2021-09-18:01:30:40,155 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpl8nednvi' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:32<00:00, 49.35s/epoch, loss=0.000241, prev_loss=0.000241] Training epochs on cuda: 100%|██████████| 10/10 [03:32<00:00, 21.26s/epoch, loss=0.000241, prev_loss=0.000241] 2021-09-18:01:36:32,70 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpoep8x299' 2021-09-18:01:36:32,183 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpoep8x299' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. Training epochs on cuda: 100%|██████████| 10/10 [03:31<00:00, 49.22s/epoch, loss=0.000202, prev_loss=0.000206] Training epochs on cuda: 100%|██████████| 10/10 [03:31<00:00, 21.13s/epoch, loss=0.000202, prev_loss=0.000206] 2021-09-18:01:42:22,606 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpoualhgco' 2021-09-18:01:42:22,730 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpoualhgco' stopped after having finished epoch 10 Evaluating on cuda: 0%| | 0.00/38.8k [00:00 Saved checkpoint after having finished epoch 10. 
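The temporary checkpoints above ('/tmp/tmp...') are written and reloaded by PyKEEN's training loop inside each HPO trial. For a standalone run where the checkpoint should persist in a known location, PyKEEN exposes a named-checkpoint option through training_kwargs. The sketch below is illustrative only and is not the configuration of the run logged here; the file names and the model are placeholders.

# Minimal sketch of explicit checkpointing in PyKEEN 1.x (assumed API: checkpoint_name
# is a standard training kwarg; the file is written under PyKEEN's checkpoint directory).
from pykeen.pipeline import pipeline
from pykeen.triples import TriplesFactory

tf = TriplesFactory.from_path("triples.tsv")        # hypothetical TSV of labeled triples
training, testing = tf.split([0.9, 0.1], random_state=0)

result = pipeline(
    training=training,
    testing=testing,
    model="TransE",                                  # placeholder; the model used in this log is redacted
    training_kwargs=dict(
        num_epochs=10,
        checkpoint_name="iwn_checkpoint.pt",         # training resumes from this file if it already exists
    ),
)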
Training epochs on cuda: 100%|██████████| 10/10 [03:39<00:00, 21.95s/epoch, loss=0.000263, prev_loss=0.000263]
2021-09-18:01:48:21,487 INFO [training_loop.py:1092] => loading checkpoint '/tmp/tmpk0u2u0b7'
2021-09-18:01:48:21,615 INFO [training_loop.py:1135] => loaded checkpoint '/tmp/tmpk0u2u0b7' stopped after having finished epoch 10
Evaluating on cuda: 0%| | 0.00/38.8k [00:00
... (word -> synset) to (word -> word)
Number of edges with words as nodes = 388301
triples' shape: (388301, 3)
TriplesFactory(num_entities=45182, num_relations=31, num_triples=388301, inverse_triples=False)
HpoPipelineResult(
    study=,
    objective=Objective(
        dataset=None, model=, loss=, optimizer=, training_loop=, stopper=, evaluator=, result_tracker=,
        metric='adjusted_arithmetic_mean_rank_index',
        dataset_kwargs=None,
        training=TriplesFactory(num_entities=45182, num_relations=31, num_triples=310640, inverse_triples=False),
        testing=TriplesFactory(num_entities=45182, num_relations=31, num_triples=38830, inverse_triples=False),
        validation=TriplesFactory(num_entities=45182, num_relations=31, num_triples=38831, inverse_triples=False),
        evaluation_entity_whitelist=None, evaluation_relation_whitelist=None,
        model_kwargs={'embedding_dim': 100}, model_kwargs_ranges=None,
        loss_kwargs=None, loss_kwargs_ranges=None,
        regularizer=, regularizer_kwargs=None, regularizer_kwargs_ranges=None,
        optimizer_kwargs=None, optimizer_kwargs_ranges=None,
        training_loop_kwargs=None,
        negative_sampler=, negative_sampler_kwargs=None, negative_sampler_kwargs_ranges=None,
        training_kwargs={'num_epochs': 10}, training_kwargs_ranges=None,
        stopper_kwargs=None, evaluator_kwargs=None, evaluation_kwargs=None,
        filter_validation_when_testing=True,
        result_tracker_kwargs=None,
        device=None,
        save_model_directory=None))
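The closing dump above fixes the shape of the experiment: 388301 (word -> word) triples over 45182 entities and 31 relations, split roughly 80/10/10 into 310640/38830/38831 training/testing/validation triples, 100-dimensional embeddings, 10 epochs per trial, trials scored by adjusted_arithmetic_mean_rank_index (a rank metric that is 1 for perfect ranking and close to 0 for random scoring), and validation triples filtered out when testing. A hypothetical reconstruction of such a run is sketched below; the input file, the model class, the random seed, and the trial budget are placeholders, since the log redacts or omits them.

# Hypothetical sketch of reproducing the configuration printed above with PyKEEN's HPO pipeline.
import numpy as np
from pykeen.triples import TriplesFactory
from pykeen.hpo import hpo_pipeline

# Placeholder input: a (388301, 3) array of (head, relation, tail) labels for the
# word -> word graph derived from the word -> synset edges.
word_word_triples = np.loadtxt("iwn_word_word_triples.tsv", dtype=str, delimiter="\t")

tf = TriplesFactory.from_labeled_triples(word_word_triples)
training, testing, validation = tf.split([0.8, 0.1, 0.1], random_state=0)  # random_state is a placeholder

hpo_result = hpo_pipeline(
    training=training,
    testing=testing,
    validation=validation,
    model="TransE",                      # placeholder; the model name is redacted in the log
    model_kwargs={"embedding_dim": 100},
    training_kwargs={"num_epochs": 10},
    metric="adjusted_arithmetic_mean_rank_index",
    filter_validation_when_testing=True,
    n_trials=100,                        # placeholder; the log does not state the trial budget
)
hpo_result.save_to_directory("hpo_results")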