
Advanced Python Projects

Python has an enormous range of applications, from "Hello World" all the way up to artificial intelligence.

There is practically no limit to the projects you can build with Python, but if you want to dig deep into the core of the language, consider these major project areas:

  • Machine learning with PyTorch, TensorFlow, Keras, or any machine learning library you prefer.
  • Computer vision with OpenCV and PIL.
  • Creating and publishing your own pip module, complete with tests and documentation (a minimal packaging sketch follows below).
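To make the last idea concrete, here is a minimal packaging sketch. The package name mypackage and the one-line description are hypothetical, purely for illustration:

# setup.py - minimal sketch of the metadata a pip-installable module needs
# (the name "mypackage" is hypothetical; pick one that is unique on PyPI)
from setuptools import setup, find_packages

setup(
    name="mypackage",
    version="0.1.0",
    description="A short one-line summary of the module",
    packages=find_packages(),   # picks up mypackage/ via its __init__.py
    python_requires=">=3.6",
)

With this file in place, python -m build produces the distribution files under dist/, and twine upload dist/* publishes them to PyPI.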

Of all of these project ideas, my absolute favorite is working on machine learning and deep learning, so let's walk through a really good example and take a deep dive into Python.

Implementing CIFAR10 with TensorFlow in Python

Let's train a network to classify images from the CIFAR10 dataset using TensorFlow's built-in convolutional neural network.

Consider the following flowchart to understand how the use case works:

Let's break this flow down step by step:

  1. First, the images are loaded into the program.
  2. These images are stored in a location the program can access.
  3. Python needs to make sense of the information present, so we run a normalization process (see the sketch after this list).
  4. We define the foundation of the neural network.
  5. We define a loss function to ensure we squeeze the maximum accuracy out of the dataset.
  6. We train the actual model so it learns something about the data it sees along the way.
  7. We test the model to analyze its accuracy, and iterate on the training process to reach better accuracy.
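To make the loading and normalization steps concrete, here is a minimal standalone sketch. It uses the copy of CIFAR-10 bundled with tf.keras as an assumption; the article's own code instead loads data through include.data.get_data_set, which isn't shown here:

# Minimal sketch: load CIFAR-10 and normalize pixel values to [0, 1].
# Assumes the tf.keras copy of the dataset; the article's scripts use
# include.data.get_data_set instead.
import numpy as np
from tensorflow.keras.datasets import cifar10

(train_x, train_y), (test_x, test_y) = cifar10.load_data()

# Pixels arrive as uint8 in [0, 255]; rescaling puts every feature on a
# small, comparable range so the network can learn from it.
train_x = train_x.astype(np.float32) / 255.0
test_x = test_x.astype(np.float32) / 255.0

print(train_x.shape, test_x.shape)  # (50000, 32, 32, 3) (10000, 32, 32, 3)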

The use case is split into two programs: one trains the network and the other tests it.

Let's train the network first.

Training the neural network:
import numpy as np
import tensorflow as tf
from time import time
import math

from include.data import get_data_set
from include.model import model, lr

train_x, train_y = get_data_set("train")
test_x, test_y = get_data_set("test")
tf.set_random_seed(21)
x, y, output, y_pred_cls, global_step, learning_rate = model()
global_accuracy = 0
epoch_start = 0

# PARAMS
_BATCH_SIZE = 128
_EPOCH = 60
_SAVE_PATH = "./tensorboard/cifar-10-v1.0.0/"

# LOSS AND OPTIMIZER
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=output, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate,
                                   beta1=0.9,
                                   beta2=0.999,
                                   epsilon=1e-08).minimize(loss, global_step=global_step)

# PREDICTION AND ACCURACY CALCULATION
correct_prediction = tf.equal(y_pred_cls, tf.argmax(y, axis=1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# SAVER
merged = tf.summary.merge_all()
saver = tf.train.Saver()
sess = tf.Session()
train_writer = tf.summary.FileWriter(_SAVE_PATH, sess.graph)

try:
    print("Trying to restore last checkpoint ...")
    last_chk_path = tf.train.latest_checkpoint(checkpoint_dir=_SAVE_PATH)
    saver.restore(sess, save_path=last_chk_path)
    print("Restored checkpoint from:", last_chk_path)
except ValueError:
    print("Failed to restore checkpoint. Initializing variables instead.")
    sess.run(tf.global_variables_initializer())


def train(epoch):
    global epoch_start
    epoch_start = time()

    # Number of batches needed to cover the whole training set.
    batch_size = int(math.ceil(len(train_x) / _BATCH_SIZE))
    i_global = 0

    for s in range(batch_size):
        batch_xs = train_x[s * _BATCH_SIZE: (s + 1) * _BATCH_SIZE]
        batch_ys = train_y[s * _BATCH_SIZE: (s + 1) * _BATCH_SIZE]

        start_time = time()
        i_global, _, batch_loss, batch_acc = sess.run(
            [global_step, optimizer, loss, accuracy],
            feed_dict={x: batch_xs, y: batch_ys, learning_rate: lr(epoch)})
        duration = time() - start_time

        if s % 10 == 0:
            # Render a simple textual progress bar every 10 batches.
            percentage = int(round((s / batch_size) * 100))

            bar_len = 29
            filled_len = int((bar_len * int(percentage)) / 100)
            bar = '=' * filled_len + '>' + '-' * (bar_len - filled_len)

            msg = "Global step: {:>5} - [{}] {:>3}% - acc: {:.4f} - loss: {:.4f} - {:.1f} sample/sec"
            print(msg.format(i_global, bar, percentage, batch_acc, batch_loss, _BATCH_SIZE / duration))

    test_and_save(i_global, epoch)


def test_and_save(_global_step, epoch):
    global global_accuracy
    global epoch_start

    # Classify the test set one batch at a time.
    i = 0
    predicted_class = np.zeros(shape=len(test_x), dtype=int)
    while i < len(test_x):
        j = min(i + _BATCH_SIZE, len(test_x))
        batch_xs = test_x[i:j, :]
        batch_ys = test_y[i:j, :]
        predicted_class[i:j] = sess.run(
            y_pred_cls,
            feed_dict={x: batch_xs, y: batch_ys, learning_rate: lr(epoch)}
        )
        i = j

    correct = (np.argmax(test_y, axis=1) == predicted_class)
    acc = correct.mean() * 100
    correct_numbers = correct.sum()

    hours, rem = divmod(time() - epoch_start, 3600)
    minutes, seconds = divmod(rem, 60)
    mes = "Epoch {} - accuracy: {:.2f}% ({}/{}) - time: {:0>2}:{:0>2}:{:05.2f}"
    print(mes.format((epoch + 1), acc, correct_numbers, len(test_x), int(hours), int(minutes), seconds))

    # Save a checkpoint only when the test accuracy improves.
    if global_accuracy != 0 and global_accuracy < acc:
        summary = tf.Summary(value=[
            tf.Summary.Value(tag="Accuracy/test", simple_value=acc),
        ])
        train_writer.add_summary(summary, _global_step)
        saver.save(sess, save_path=_SAVE_PATH, global_step=_global_step)
        mes = "This epoch received better accuracy: {:.2f} > {:.2f}. Saving session..."
        print(mes.format(acc, global_accuracy))
        global_accuracy = acc
    elif global_accuracy == 0:
        global_accuracy = acc

    print("###########################################################################################################")


def main():
    train_start = time()

    for i in range(_EPOCH):
        print("Epoch: {}/{}".format((i + 1), _EPOCH))
        train(i)

    hours, rem = divmod(time() - train_start, 3600)
    minutes, seconds = divmod(rem, 60)
    mes = "Best accuracy per session: {:.2f}, time: {:0>2}:{:0>2}:{:05.2f}"
    print(mes.format(global_accuracy, int(hours), int(minutes), seconds))


if __name__ == "__main__":
    main()
    sess.close()

Output:

Epoch: 60/60
Global step: 23070 - [>-----------------------------]   0% - acc: 0.9531 - loss: 1.5081 - 7045.4 sample/sec
Global step: 23080 - [>-----------------------------]   3% - acc: 0.9453 - loss: 1.5159 - 7147.6 sample/sec
Global step: 23090 - [=>----------------------------]   5% - acc: 0.9844 - loss: 1.4764 - 7154.6 sample/sec
Global step: 23100 - [==>---------------------------]   8% - acc: 0.9297 - loss: 1.5307 - 7104.4 sample/sec
Global step: 23110 - [==>---------------------------]  10% - acc: 0.9141 - loss: 1.5462 - 7091.4 sample/sec
Global step: 23120 - [===>--------------------------]  13% - acc: 0.9297 - loss: 1.5314 - 7162.9 sample/sec
Global step: 23130 - [====>-------------------------]  15% - acc: 0.9297 - loss: 1.5307 - 7174.8 sample/sec
Global step: 23140 - [=====>------------------------]  18% - acc: 0.9375 - loss: 1.5231 - 7140.0 sample/sec
Global step: 23150 - [=====>------------------------]  20% - acc: 0.9297 - loss: 1.5301 - 7152.8 sample/sec
Global step: 23160 - [======>-----------------------]  23% - acc: 0.9531 - loss: 1.5080 - 7112.3 sample/sec
Global step: 23170 - [=======>----------------------]  26% - acc: 0.9609 - loss: 1.5000 - 7154.0 sample/sec
Global step: 23180 - [========>---------------------]  28% - acc: 0.9531 - loss: 1.5074 - 6862.2 sample/sec
Global step: 23190 - [========>---------------------]  31% - acc: 0.9609 - loss: 1.4993 - 7134.5 sample/sec
Global step: 23200 - [=========>--------------------]  33% - acc: 0.9609 - loss: 1.4995 - 7166.0 sample/sec
Global step: 23210 - [==========>-------------------]  36% - acc: 0.9375 - loss: 1.5231 - 7116.7 sample/sec
Global step: 23220 - [===========>------------------]  38% - acc: 0.9453 - loss: 1.5153 - 7134.1 sample/sec
Global step: 23230 - [===========>------------------]  41% - acc: 0.9375 - loss: 1.5233 - 7074.5 sample/sec
Global step: 23240 - [============>-----------------]  43% - acc: 0.9219 - loss: 1.5387 - 7176.9 sample/sec
Global step: 23250 - [=============>----------------]  46% - acc: 0.8828 - loss: 1.5769 - 7144.1 sample/sec
Global step: 23260 - [==============>---------------]  49% - acc: 0.9219 - loss: 1.5383 - 7059.7 sample/sec
Global step: 23270 - [==============>---------------]  51% - acc: 0.8984 - loss: 1.5618 - 6638.6 sample/sec
Global step: 23280 - [===============>--------------]  54% - acc: 0.9453 - loss: 1.5151 - 7035.7 sample/sec
Global step: 23290 - [================>-------------]  56% - acc: 0.9609 - loss: 1.4996 - 7129.0 sample/sec
Global step: 23300 - [=================>------------]  59% - acc: 0.9609 - loss: 1.4997 - 7075.4 sample/sec
Global step: 23310 - [=================>------------]  61% - acc: 0.8750 - loss: 1.5842 - 7117.8 sample/sec
Global step: 23320 - [==================>-----------]  64% - acc: 0.9141 - loss: 1.5463 - 7157.2 sample/sec
Global step: 23330 - [===================>----------]  66% - acc: 0.9062 - loss: 1.5549 - 7169.3 sample/sec
Global step: 23340 - [====================>---------]  69% - acc: 0.9219 - loss: 1.5389 - 7164.4 sample/sec
Global step: 23350 - [====================>---------]  72% - acc: 0.9609 - loss: 1.5002 - 7135.4 sample/sec
Global step: 23360 - [=====================>--------]  74% - acc: 0.9766 - loss: 1.4842 - 7124.2 sample/sec
Global step: 23370 - [======================>-------]  77% - acc: 0.9375 - loss: 1.5231 - 7168.5 sample/sec
Global step: 23380 - [======================>-------]  79% - acc: 0.8906 - loss: 1.5695 - 7175.2 sample/sec
Global step: 23390 - [=======================>------]  82% - acc: 0.9375 - loss: 1.5225 - 7132.1 sample/sec
Global step: 23400 - [========================>-----]  84% - acc: 0.9844 - loss: 1.4768 - 7100.1 sample/sec
Global step: 23410 - [=========================>----]  87% - acc: 0.9766 - loss: 1.4840 - 7172.0 sample/sec
Global step: 23420 - [==========================>---]  90% - acc: 0.9062 - loss: 1.5542 - 7122.1 sample/sec
Global step: 23430 - [==========================>---]  92% - acc: 0.9297 - loss: 1.5313 - 7145.3 sample/sec
Global step: 23440 - [===========================>--]  95% - acc: 0.9297 - loss: 1.5301 - 7133.3 sample/sec
Global step: 23450 - [============================>-]  97% - acc: 0.9375 - loss: 1.5231 - 7135.7 sample/sec
Global step: 23460 - [=============================>] 100% - acc: 0.9250 - loss: 1.5362 - 10297.5 sample/sec
Epoch 60 - accuracy: 78.81% (7881/10000)
This epoch received better accuracy: 78.81 > 78.78. Saving session...
###########################################################################################################
Running the model with the test dataset:
import numpy as np
import tensorflow as tf

from include.data import get_data_set
from include.model import model

test_x, test_y = get_data_set("test")
x, y, output, y_pred_cls, global_step, learning_rate = model()

_BATCH_SIZE = 128
_CLASS_SIZE = 10
_SAVE_PATH = "./tensorboard/cifar-10-v1.0.0/"

saver = tf.train.Saver()
sess = tf.Session()

try:
    print("Trying to restore last checkpoint ...")
    last_chk_path = tf.train.latest_checkpoint(checkpoint_dir=_SAVE_PATH)
    saver.restore(sess, save_path=last_chk_path)
    print("Restored checkpoint from:", last_chk_path)
except ValueError:
    print("Failed to restore checkpoint. Initializing variables instead.")
    sess.run(tf.global_variables_initializer())


def main():
    # Classify the test set one batch at a time.
    i = 0
    predicted_class = np.zeros(shape=len(test_x), dtype=int)
    while i < len(test_x):
        j = min(i + _BATCH_SIZE, len(test_x))
        batch_xs = test_x[i:j, :]
        batch_ys = test_y[i:j, :]
        predicted_class[i:j] = sess.run(y_pred_cls, feed_dict={x: batch_xs, y: batch_ys})
        i = j

    correct = (np.argmax(test_y, axis=1) == predicted_class)
    acc = correct.mean() * 100
    correct_numbers = correct.sum()
    print()
    print("Accuracy on Test-Set: {0:.2f}% ({1} / {2})".format(acc, correct_numbers, len(test_x)))


if __name__ == "__main__":
    main()
    sess.close()
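Note that this script only produces a meaningful number if the training run above has already written a checkpoint to ./tensorboard/cifar-10-v1.0.0/; if no checkpoint is found, the except branch falls back to freshly initialized (i.e., random) weights and the reported accuracy will be near chance.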

Simple output:

Trying to restore last checkpoint ...
Restored checkpoint from: ./tensorboard/cifar-10-v1.0.0/-23460
Accuracy on Test-Set: 78.81% (7881 / 10000)
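One note on the helper modules: both scripts import model and lr from include/model.py (and get_data_set from include/data.py), and neither file is shown in this article. To give a rough idea of the interface the scripts rely on, here is a heavily simplified, hypothetical sketch of such a module; the architecture and learning-rate schedule are assumptions for illustration, not the article's actual model:

# Hypothetical sketch of include/model.py - NOT the article's actual model,
# only an illustration of the interface the training script expects.
import tensorflow as tf


def model():
    # Images are fed as flat 3072-element vectors (32 x 32 x 3), matching
    # the test_x[i:j, :] slicing used in the scripts above.
    x = tf.placeholder(tf.float32, shape=[None, 32 * 32 * 3], name="x")
    y = tf.placeholder(tf.float32, shape=[None, 10], name="y")
    learning_rate = tf.placeholder(tf.float32, name="learning_rate")
    global_step = tf.Variable(0, trainable=False, name="global_step")

    # A deliberately small conv net: conv -> pool -> conv -> pool -> dense.
    net = tf.reshape(x, [-1, 32, 32, 3])
    net = tf.layers.conv2d(net, 32, 3, padding="same", activation=tf.nn.relu)
    net = tf.layers.max_pooling2d(net, 2, 2)
    net = tf.layers.conv2d(net, 64, 3, padding="same", activation=tf.nn.relu)
    net = tf.layers.max_pooling2d(net, 2, 2)
    net = tf.layers.flatten(net)
    net = tf.layers.dense(net, 256, activation=tf.nn.relu)
    output = tf.layers.dense(net, 10)        # raw logits for the 10 classes
    y_pred_cls = tf.argmax(output, axis=1)   # predicted class index

    return x, y, output, y_pred_cls, global_step, learning_rate


def lr(epoch):
    # Hypothetical step-decay schedule; the real module may differ.
    if epoch < 20:
        return 1e-3
    elif epoch < 40:
        return 1e-4
    return 1e-5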

That was a really interesting use case, wasn't it? We saw how machine learning works in practice and built a basic program to implement it with the TensorFlow library in Python.

Conclusion

The Python projects discussed in this article should help you get started learning Python; they will hook you in and push you to actually learn more about the language. That will come in very handy whenever you need to reason about a problem and deliver a solution in Python. Working through multiple real-life projects, and the hands-on experience with these concepts, will bring you up to speed so you can start exploring and understanding projects on your own.

From today on, you are no longer a beginner, or even just a competent user; the road to mastery is opening up in front of you.
