AI CENTER

AI Seminar
Semester 110-1 (Fall 2021)

No. 1
Date: 2021/10/03
Contents: Seminar on Artificial Intelligence for Engineering Applications - Introduction to ML
Reference: Links
Video: Link

No. 2
Date: 2021/10/20
Contents: Seminar on Artificial Intelligence for Engineering Applications - Do Vision Transformers See Like Convolutional Neural Networks?
Reference: Video
Video: Link

No. 3
Date: 2021/11/02
Contents: Seminar on Artificial Intelligence for Engineering Applications - Boltzmann Generators: Sampling Equilibrium States of Many-Body Systems with Deep Learning
Reference: Paper, Videos
Video: Link

No. 4
Date: 2021/11/26
Contents: Seminar on Artificial Intelligence for Engineering Applications - Transformer
Reference: Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.
Video: Link

No. 5
Date: 2021/12/01
Contents: Seminar on Artificial Intelligence for Engineering Applications - U-Net
Reference: Ronneberger, O., Fischer, P., & Brox, T. (2015). U-Net: Convolutional networks for biomedical image segmentation. International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 234-241). Springer.
Video: Link

No. 6
Date: 2021/12/15
Contents: Seminar on Artificial Intelligence for Engineering Applications - GraphSage
Reference:
  1. Hamilton, W. L., Ying, R., & Leskovec, J. (2017). Inductive representation learning on large graphs. In Proceedings of the 31st International Conference on Neural Information Processing Systems (pp. 1025-1035).
  2. A Comprehensive Case-Study of GraphSage using PyTorch Geometric and Open-Graph-Benchmark
  3. Material
  4. Video
Video: Link

No. 7
Date: 2022/01/21
Contents: Seminar on Artificial Intelligence for Engineering Applications - Masked Autoencoders Are Scalable Vision Learners
Reference:
  1. Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
  2. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., & Gelly, S. (2020). An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
  3. He, K., Chen, X., Xie, S., Li, Y., Dollár, P., & Girshick, R. (2021). Masked autoencoders are scalable vision learners. arXiv preprint arXiv:2111.06377.
  4. JinWon Lee (2021). PR-355: Masked Autoencoders Are Scalable Vision Learners. Retrieved from YouTube.
  5. DeepReader (2021). MAE: Masked Autoencoders Are Scalable Vision Learners. Retrieved from YouTube.
  6. Jia-Yau Shiau (2021). Masked Autoencoders: Self-Supervised Learners Drawing on BERT and ViT (in Chinese). Retrieved from YouTube.
Video: Link

© NCREE - NTUCE Joint Artificial Intelligence Research Center. All Rights Reserved.
Address: No. 200, Sec. 3, Xinhai Rd., Da'an Dist., Taipei City
Email: [email protected]