Ever longing, forever longing. Today is Qixi, China's Valentine's Day ❤️ In Thailand, the closest counterpart to Qixi is the annual Loi Krathong festival, a romantic holiday that embodies the tenderest love of young Thai men and women.
✌️Loi Krathong comes from two Thai words: loi, meaning "to float", and krathong, a small floating basket made from a banana tree trunk or stale bread. Thais usually make their own krathong from folded banana leaves, exotic flowers, candles, and incense. They then set the little vessel afloat on a river as an offering to the water goddess Pra Mae Khongkha.
❤️Finally, may all lovers in the world be united at last ❤️
#MAE Art News#
Lecture announcement! Tomorrow at 5 p.m.
Curating and Curatoriality: Contemporary Art and Curating as Medium
This lecture invites Sun Qidong, director of MadeIn Gallery and curator, to discuss contemporary art curating: the categories of curating, the meaning of curating, and the function of the "curatorial", with case studies drawn from major current exhibitions such as the 2022 Venice Biennale "The Milk of Dreams", the Berlin Biennale "Still Present!", and documenta fifteen.
#Contemporary Art [supertopic]# #Art Sharing# #Art Lecture#
Audio-MAE
Audio-MAE overview architecture. Figure source: Huang et al. (2022)
We have already seen the huge success of masked autoencoders (MAE) in computer vision. Huang et al. (2022) propose Audio-MAE, a simple extension of MAE to self-supervised learning on audio. It builds on a Transformer encoder-decoder: audio spectrogram patches are encoded with a high masking ratio (80%), and the decoder then re-orders and decodes the encoded context, padded with mask tokens, with the goal of reconstructing the input.
Audio-MAE minimizes the mean squared error (MSE) between the reconstruction and the input spectrogram, computed only on the masked portion. The method achieves new state-of-the-art performance on six audio and speech classification tasks, and also outperforms recent models that rely on external supervised pre-training.
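The two core ingredients described above — dropping a large fraction of spectrogram patches before encoding, and scoring the reconstruction only on the masked patches — can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names, the toy patch shapes, and the use of flattened patch vectors are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_patches(patches, mask_ratio=0.8):
    """Randomly split patches into visible and masked sets.

    Only the visible ~20% would be fed to the encoder; the masked
    indices are kept so the loss can be restricted to them later.
    """
    n = patches.shape[0]
    n_keep = int(round(n * (1 - mask_ratio)))
    perm = rng.permutation(n)
    keep_idx = np.sort(perm[:n_keep])   # patches the encoder sees
    mask_idx = np.sort(perm[n_keep:])   # patches to be reconstructed
    return patches[keep_idx], keep_idx, mask_idx

def masked_mse(reconstruction, target, mask_idx):
    """MSE computed only over the masked patches."""
    diff = reconstruction[mask_idx] - target[mask_idx]
    return float(np.mean(diff ** 2))

# Toy "spectrogram": 25 patches, each flattened to a 4-dim vector.
patches = rng.standard_normal((25, 4))
visible, keep_idx, mask_idx = mask_patches(patches, mask_ratio=0.8)

# With an 80% masking ratio, only 5 of 25 patches remain visible.
assert visible.shape == (5, 4) and mask_idx.shape[0] == 20
# A perfect reconstruction incurs zero loss on the masked portion.
assert masked_mse(patches, patches, mask_idx) == 0.0
```

The high masking ratio is what makes pre-training cheap: the encoder only processes the small visible subset, while the lightweight decoder handles the full padded sequence.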