
NEURAL NETWORKS APPLICATIONS IN VALUATION OF BANNER AD CREATIVE EFFICIENCY

Price: 1,500 RUB
Content: theory
Length: 74 pages
Year written:
Description of the work

Work by user Vseznayka1995
Good afternoon! Dear students, presented for your attention is a graduation thesis on the topic "NEURAL NETWORKS APPLICATIONS IN VALUATION OF BANNER AD CREATIVE EFFICIENCY".
Originality of the work: 92%

Table of Contents

Abstract 3
1. Introduction 5
2. Literature review 10
3. Theoretical framework 19
3.1 Neural networks 19
3.2 Data augmentation 26
3.3 Visualizing convolutional neural networks 40
4. The application of neural networks in creative advertising strategies 52
5. Results of computational experiments 57
6. Conclusion 69
Bibliography 72
Additional materials 76


Abstract
               
This work explores the application of convolutional neural networks to advertisement banners, predicting whether a banner's click-through rate is above or below average. The data were sourced from Mediascope's internet monitoring in the form of advertisement banners and their respective click-through rates. The topic was motivated by a specific problem that arises during the research phase of advertising campaign planning.
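As an illustration only, and not the code of this thesis, the following minimal sketch shows the kind of model described above: a small convolutional network (here in Keras) that maps a banner image to the probability of an above-average click-through rate. The function name, input size, and layer widths are assumptions made for the example.

    # Minimal illustrative sketch (not the thesis code): a small CNN that
    # classifies a banner image as above-average CTR (1) or below-average (0).
    # Input size and layer widths are assumptions for the example.
    import tensorflow as tf

    def build_banner_ctr_classifier(input_shape=(128, 128, 3)):
        model = tf.keras.Sequential([
            tf.keras.Input(shape=input_shape),
            tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(64, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(1, activation="sigmoid"),  # P(CTR above average)
        ])
        model.compile(optimizer="adam",
                      loss="binary_crossentropy",
                      metrics=["accuracy"])
        return model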
The work opens with a brief introduction to the topic and to the problem being addressed, laying the ground for the proposed solution.
It is followed by a literature review, which surveys the application of neural networks to image classification as well as techniques for improving model results.
The literature review grounds the theoretical framework, which briefly discusses the methods and techniques, such as data augmentation, that are important for building an accurate model and avoiding overfitting.
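For illustration, the sketch below shows the kind of on-the-fly image augmentation discussed there, using Keras preprocessing layers; the particular transforms and their parameter values are assumptions for the example, not the settings used in the work.

    # Illustrative augmentation pipeline (assumed parameters): random flips,
    # small rotations, zooms, and contrast changes applied during training
    # to reduce overfitting on a limited set of banner images.
    import tensorflow as tf

    augmentation = tf.keras.Sequential([
        tf.keras.layers.RandomFlip("horizontal"),
        tf.keras.layers.RandomRotation(0.05),  # up to roughly +/-18 degrees
        tf.keras.layers.RandomZoom(0.1),
        tf.keras.layers.RandomContrast(0.1),
    ])

    # Typical use: place it in front of the classifier so that it is active
    # only during training, e.g. x = augmentation(inputs) before the first
    # convolutional layer.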
After these technical parts, the solution section discusses the real-life application that motivated the topic of this work. The problem the solution is aimed at is briefly described, together with the advantages and disadvantages of the proposed approach.
Finally, the results of the computational experiments, which take the form of a convolutional neural network model, are discussed together with the applied techniques, assumptions, and limitations. Visualizations are created to show how the model decides on the classification of an image.
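As an illustration of one way such visualizations can be produced, the sketch below computes an occlusion-sensitivity map in the spirit of Zeiler and Fergus (2014), cited in the bibliography: a grey patch is slid across the banner and the drop in the predicted probability of an above-average click-through rate is recorded, so the regions the model relies on stand out. The patch size, stride, and fill value are assumptions for the example.

    # Illustrative occlusion-sensitivity map (assumed patch size and stride):
    # regions whose occlusion lowers the predicted score the most are the
    # regions the classifier relies on for its decision.
    import numpy as np

    def occlusion_map(model, image, patch=16, stride=8, fill=0.5):
        h, w, _ = image.shape
        baseline = float(model.predict(image[None], verbose=0)[0, 0])
        rows = (h - patch) // stride + 1
        cols = (w - patch) // stride + 1
        heatmap = np.zeros((rows, cols))
        for i, y in enumerate(range(0, h - patch + 1, stride)):
            for j, x in enumerate(range(0, w - patch + 1, stride)):
                occluded = image.copy()
                occluded[y:y + patch, x:x + patch, :] = fill
                score = float(model.predict(occluded[None], verbose=0)[0, 0])
                heatmap[i, j] = baseline - score  # large drop = important region
        return heatmap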
The conclusion sums up the results of the work and discusses how the proposed solution can be applied in the new business era.


Bibliography
  1. Liu, X., Wang, X., & Matwin, S. (2018). Interpretable Deep Convolutional Neural Networks via Meta-learning. International Joint Conference on Neural Networks (IJCNN). https://arxiv.org/pdf/1802.00560.pdf
  2. Zeiler, M. D., & Fergus, R. (2014). Visualizing and Understanding Convolutional Networks. European Conference on Computer Vision. https://arxiv.org/abs/1311.2901
  3. Lundberg, S., & Lee, S.-I. (2017). A Unified Approach to Interpreting Model Predictions. Computing Research Repository, abs/1705.07874. https://arxiv.org/abs/1705.07874
  4. Gosiewska, A., & Biecek, P. (2019). iBreakDown: Uncertainty of Model Explanations for Non-additive Predictive Models. arXiv preprint arXiv:1903.11420. https://arxiv.org/abs/1903.11420
  5. Zintgraf, L. M., Cohen, T. S., & Welling, M. (2016). A New Method to Visualize Deep Neural Networks. CoRR, abs/1603.02518. http://arxiv.org/abs/1603.02518
  6. De Veaux, R., & Ungar, L. (1997). A Brief Introduction to Neural Networks. Technical Report, Williams College, Williamstown, MA. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.33.3938&rep=rep1&type=pdf
  7. Shorten, C., & Khoshgoftaar, T. M. (2019). A Survey on Image Data Augmentation for Deep Learning. Journal of Big Data, 6, 60. https://doi.org/10.1186/s40537-019-0197-0
  8. Ribeiro, M., Singh, S., & Guestrin, C. (2016). "Why Should I Trust You?": Explaining the Predictions of Any Classifier. pp. 97-101. https://www.aclweb.org/anthology/N16-3020/
  9. LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-Based Learning Applied to Document Recognition. Proceedings of the IEEE, 86(11), 2278-2324. doi:10.1109/5.726791. http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf
  10. Krizhevsky, A., Sutskever, I., & Hinton, G. (2012). ImageNet Classification with Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems, 25. doi:10.1145/3065386. http://www.cs.toronto.edu/~hinton/absps/imagenet.pdf
  11. Sultana, F., Sufian, A., & Dutta, P. (2019). Advancements in Image Classification using Convolutional Neural Network. Fourth International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN 2018). https://arxiv.org/abs/1905.03288
  12. He, T., Zhang, Z., Zhang, H., Zhang, Z., Xie, J., & Li, M. (2019). Bag of Tricks for Image Classification with Convolutional Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 558-567. https://arxiv.org/abs/1812.01187
  13. Maggiori, E., Tarabalka, Y., Charpiat, G., & Alliez, P. (2017). High-Resolution Image Classification with Convolutional Networks. IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2017), Fort Worth, United States. https://hal.archives-ouvertes.fr/hal-01660754/document
  14. Shu, M. (2019). Deep Learning for Image Classification on Very Small Datasets Using Transfer Learning. Creative Components, 345. https://lib.dr.iastate.edu/creativecomponents/345
  15. Özgenel, C. F., & Sorguç, A. G. (2018). Performance Comparison of Pretrained Convolutional Neural Networks on Crack Detection in Buildings. Proceedings of the 35th ISARC, Berlin, Germany, ISBN 978-3-00-060855-1, pp. 693-700. https://doi.org/10.22260/ISARC2018/0094
  16. Potlabathini, H. (2019). Convolution Neural Network for Cooking State Recognition using VGG19. https://rpal.cse.usf.edu/reports/state_recognition_symposium_2019/2019-05.pdf
  17. Kamran, S. A., & Sabbir, A. S. (2017). Efficient Yet Deep Convolutional Neural Networks for Semantic Segmentation. https://arxiv.org/pdf/1707.08254.pdf
  18. Perez, L., & Wang, J. (2017). The Effectiveness of Data Augmentation in Image Classification using Deep Learning. https://arxiv.org/abs/1712.04621
  19. Wong, S., Gatt, A., Stamatescu, V., & McDonnell, M. (2016). Understanding Data Augmentation for Classification: When to Warp? https://arxiv.org/pdf/1609.08764.pdf
  20. Inoue, H. (2018). Data Augmentation by Pairing Samples for Images Classification. https://arxiv.org/pdf/1801.02929.pdf
  21. Fawzi, A., Samulowitz, H., Turaga, D., & Frossard, P. (2016). Adaptive Data Augmentation for Image Classification. pp. 3688-3692. doi:10.1109/ICIP.2016.7533048
  22. O'Gara, S., & McGuinness, K. (2019). Comparing Data Augmentation Strategies for Deep Image Classification. IMVIP 2019: Irish Machine Vision & Image Processing, Technological University Dublin, Dublin, Ireland, August 28-30. doi:10.21427/148b-ar75
  23. Gu, S., Pednekar, M., & Slater, R. (2019). Improve Image Classification Using Data Augmentation and Neural Networks. SMU Data Science Review, 2(2), Article 1. https://scholar.smu.edu/datasciencereview/vol2/iss2/1
  24. Chatfield, K., Simonyan, K., Vedaldi, A., & Zisserman, A. (2014). Return of the Devil in the Details: Delving Deep into Convolutional Nets. Proceedings of the British Machine Vision Conference (BMVC 2014). doi:10.5244/C.28.6. https://arxiv.org/abs/1405.3531
  25. Kang, G., Dong, X., Zheng, L., & Yang, Y. (2017). PatchShuffle Regularization. https://arxiv.org/abs/1707.07103
  26. Konno, T., & Iwazume, M. (2018). Icing on the Cake: An Easy and Quick Post-Learnig Method You Can Try After Deep Learning. https://arxiv.org/abs/1807.06540
  27. Huang, Y., Cheng, Y., Chen, D., Lee, H., Ngiam, J., Le, Q., & Chen, Z. (2018). GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism. https://arxiv.org/abs/1811.06965
  28. Singh, K., Chaudhary, A., & Kaur, P. (2019). A Machine Learning Approach for Enhancing Defence Against Global Terrorism. pp. 1-5. doi:10.1109/IC3.2019.8844947. https://ieeexplore.ieee.org/document/8844947
  29. Herlocker, J., Konstan, J., & Riedl, J. (2001). Explaining Collaborative Filtering Recommendations. Proceedings of the ACM Conference on Computer Supported Cooperative Work. doi:10.1145/358916.358995. https://grouplens.org/site-content/uploads/explain-CSCW-20001.pdf
  30. Kaufman, S., Rosset, S., & Perlich, C. (2011). Leakage in Data Mining: Formulation, Detection, and Avoidance. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 556-563. doi:10.1145/2020408.2020496. https://www.cs.umb.edu/~ding/history/470_670_fall_2011/papers/cs670_Tran_PreferredPaper_LeakingInDataMining.pdf

 
