AI CENTER
  • About
    • Our Mission
    • Our People
    • Our Event
    • Our Equipment
    • Our Space
    • Our Logo
    • Our Publication
  • Our Research
    • Research Topic
    • Projects
    • Digital Twin
  • Our Educational Programs
    • Summer Internship Program
    • Doctoral Program
    • Courses
  • Contact
  • Search
  • Blog

AI Seminar (Semester 110-2)

No. 1 | Date: 2022/03/16
Topic: AI Seminar: Physics-Informed Machine Learning
References:
  1. Stoudenmire, E. & Schwab, D. J. Supervised learning with tensor networks. Adv. Neural Inf. Process. Syst. 29, 4799–4807 (2016).
  2. Mathews, A., Francisquez, M., Hughes, J. & Hatch, D. Uncovering edge plasma dynamics via deep learning from partial observations. Preprint at arXiv https://arxiv.org/abs/2009.05005 (2020).
  3. Shukla, K., Di Leoni, P. C., Blackshire, J., Sparkman, D. & Karniadakis, G. E. Physics-informed neural network for ultrasound nondestructive quantification of surface breaking cracks. J. Nondestruct. Eval. 39, 1–20 (2020).
  4. Lu, L. et al. Extraction of mechanical properties of materials through deep learning from instrumented indentation. Proc. Natl Acad. Sci. USA 117, 7052–7062 (2020).
  5. Xu, Z.-Q. J., Zhang, Y., Luo, T., Xiao, Y. & Ma, Z. Frequency principle: Fourier analysis sheds light on deep neural networks. Commun. Comput. Phys. 28, 1746–1767 (2020).
  6. Rahaman, N. et al. On the spectral bias of neural networks. Proc. Int. Conf. Mach. Learn. 97, 5301–5310 (2019).
  7. Wang, S., Yu, X. & Perdikaris, P. When and why PINNs fail to train: a neural tangent kernel perspective. Preprint at arXiv https://arxiv.org/abs/2007.14527 (2020).
  8. Wang, S., Wang, H. & Perdikaris, P. On the eigenvector bias of Fourier feature networks: from regression to solving multi-scale PDEs with physics-informed neural networks. Preprint at arXiv https://arxiv.org/abs/2012.10047 (2020).
  9. Wang, S., Teng, Y. & Perdikaris, P. Understanding and mitigating gradient pathologies in physics-informed neural networks. Preprint at arXiv https://arxiv.org/abs/2001.04536 (2020).
  10. Link
  11. ppt
Video: Link
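As a companion to the reading list above: physics-informed learning trains a model to satisfy a governing equation at collocation points rather than fitting data alone. The toy sketch below (all names illustrative, not from any of the cited papers) replaces the neural network with a one-parameter ansatz u(t; a) = exp(a·t) for the ODE u' = -u, u(0) = 1, so the physics residual can be written by hand; a real PINN would use a network, automatic differentiation, and gradient descent.

```python
import numpy as np

def pinn_loss(a, t):
    """Mean squared residual of u' + u = 0 for the ansatz u(t; a) = exp(a*t).

    The ansatz satisfies u(0) = 1 by construction; in general a separate
    initial/boundary-condition loss term is added to this physics loss.
    """
    u = np.exp(a * t)        # ansatz at the collocation points
    du_dt = a * u            # its time derivative (autodiff in a real PINN)
    residual = du_dt + u     # residual of the governing equation u' = -u
    return np.mean(residual**2)

t = np.linspace(0.0, 1.0, 50)            # collocation points
candidates = np.linspace(-2.0, 0.0, 201) # coarse grid search; gradient descent in practice
best = min(candidates, key=lambda a: pinn_loss(a, t))
print(best)  # close to -1, recovering the exact solution u(t) = exp(-t)
```

The physics loss is minimized without ever observing data from the true solution, which is the key trait the references above analyze (and whose training pathologies refs. 5–9 study).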
No. 2 | Date: 2022/04/06
Topic: AI Seminar: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
References:
  1. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., & Gelly, S. (2020). An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
  2. He, K., Chen, X., Xie, S., Li, Y., Dollár, P., & Girshick, R. (2021). Masked autoencoders are scalable vision learners. arXiv preprint arXiv:2111.06377.
  3. Kolesnikov, A., Beyer, L., Zhai, X., Puigcerver, J., Yung, J., Gelly, S., & Houlsby, N. (2020). Big transfer (BiT): General visual representation learning. arXiv preprint arXiv:1912.11370.
  4. Yannic Kilcher (2020). An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (Paper Explained). Retrieved from https://youtu.be/TrdevFK_am4
  5. 科技猛兽 (2021). A Detailed Walkthrough of the Vision Transformer (Theory and Code), Part 2. Retrieved from https://zhuanlan.zhihu.com/p/342261872
  6. AppleHank (2020). ViT — Vision Transformer: Convolution is dead, long live Transformers! Retrieved from https://applehank.medium.com/vit-vision-transformer-convolution-is-dead-long-live-the-self-attention-bbced72a8487
  7. https://github.com/google-research/vision_transformer/
  8. https://github.com/lucidrains/vit-pytorch
  9. ppt
Video: Link
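To make the title concrete: ViT flattens non-overlapping 16x16 image patches into a token sequence, linearly projects them, and prepends a [CLS] token before a standard Transformer encoder. A minimal NumPy sketch of that front end, with a random matrix standing in for the trained patch-embedding weights (shapes follow the ViT-Base configuration in ref. 1):

```python
import numpy as np

def patchify(image, patch=16):
    """(H, W, C) image -> (num_patches, patch*patch*C) flattened patches."""
    h, w, c = image.shape
    rows, cols = h // patch, w // patch
    # Crop to a whole number of patches, then carve out a patch grid.
    grid = image[:rows * patch, :cols * patch].reshape(
        rows, patch, cols, patch, c).transpose(0, 2, 1, 3, 4)
    return grid.reshape(rows * cols, patch * patch * c)

rng = np.random.default_rng(0)
img = rng.standard_normal((224, 224, 3))   # one 224x224 RGB image
tokens = patchify(img)                     # (196, 768): a 14x14 grid of patches
proj = rng.standard_normal((768, 768))     # linear patch embedding (trained in ViT)
cls = np.zeros((1, 768))                   # learnable [CLS] token in the real model
sequence = np.vstack([cls, tokens @ proj]) # what the Transformer encoder consumes
print(sequence.shape)  # (197, 768), the sequence length quoted in the paper
```

Position embeddings, which are added to this sequence before the encoder, are omitted here for brevity.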
No. 3 | Date: 2022/06/01
Topic: Seminar on Artificial Intelligence for Engineering Applications: Capsule Networks
References:
  1. Sabour, S., Frosst, N., & Hinton, G. E. (2017). Dynamic routing between capsules. Advances in neural information processing systems, 30.
  2. Pechyonkin, M. (2017). Understanding Hinton's Capsule Networks, retrieved May 19, 2022, from: https://medium.com/ai%C2%B3-theory-practice-business/understanding-hintons-capsule-networks-part-iii-dynamic-routing-between-capsules-349f6d30418
  3. Lee, H. Y. (2017). Capsule, Deep Learning Online Course, retrieved May 19, 2022, from: https://www.bilibili.com/video/BV1Qx411V75M/
  4. Géron, A. (2017). Capsule Networks (CapsNets) – Tutorial, retrieved May 19, 2022, from: https://www.youtube.com/watch?v=pPN8d0E3900
  5. Bielski, A. (2019). Dynamic Routing Between Capsules - PyTorch implementation, retrieved May 19, 2022, from: https://github.com/adambielski/CapsNet-pytorch
  6. Xinyi, Z., & Chen, L. (2018). Capsule graph neural network. International Conference on Learning Representations.
  7. Kipf, T. N., & Welling, M. (2016). Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.
  8. ppt
Video: Link
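A small illustration of the "squash" nonlinearity at the heart of ref. 1: it rescales a capsule's output vector so its length lies in [0, 1) and can be read as the probability that the entity the capsule represents is present, while preserving the vector's direction (which encodes the entity's pose). A NumPy sketch, with the epsilon term an implementation detail added here for numerical safety:

```python
import numpy as np

def squash(s, eps=1e-8):
    """Squashing nonlinearity from Sabour et al. (2017):
    v = (|s|^2 / (1 + |s|^2)) * (s / |s|).
    Long vectors are scaled to length near 1, short ones toward 0."""
    norm_sq = np.sum(s**2, axis=-1, keepdims=True)
    norm = np.sqrt(norm_sq + eps)          # eps avoids division by zero
    return (norm_sq / (1.0 + norm_sq)) * (s / norm)

v = squash(np.array([3.0, 4.0]))           # input length 5
print(np.linalg.norm(v))                   # 25/26 ≈ 0.9615: long input -> near 1
print(np.linalg.norm(squash(np.array([0.1, 0.0]))))  # short input -> near 0
```

In the full dynamic-routing algorithm this function is applied to the weighted sum of prediction vectors at every routing iteration; the routing logits themselves are omitted here.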

© NCREE - NTUCE Joint Artificial Intelligence Research Center. All Rights Reserved.
Address: No. 200, Sec. 3, Xinhai Rd., Da'an District, Taipei (台北市大安區辛亥路三段200號)
Email: [email protected]