Rosie · 2 min read · Archive 2022

News in the World of AI

What's new in the world of AI? We have successfully completed another phase of DigiHavel! During the pilot, students and teachers posed thousands of unique and often very sophisticated questions. Thank you very much. Thanks to them, the next version will be even smarter. I am continuing my little lecture tour, with the next one scheduled for 1st December at VŠE.

Meanwhile, the scientists at Meta AI (Facebook) have not been idle either: in mid-November they launched a demo of a large language model named Galactica. The model is intended to store, combine, and reason about scientific knowledge. Its strong point is said to be technical knowledge, such as equations in LaTeX, mathematics, and similar fields. It was trained on "106 billion tokens of scientific texts and publicly available data, including articles, textbooks, scientific websites, encyclopaedias, knowledge databases, and more."

Just two days after the online demo was launched, it was taken down following a wave of criticism from some scientists and the media: the model turned out to be a potentially dangerous generator of pseudoscientific nonsense. Clearly there was a misunderstanding of how these models currently work, and many laypeople had inflated expectations of the model. Like GPT-3, Galactica fabricates realistic-looking articles that convincingly blend truth and fiction. Many people failed to grasp that the model's outputs should be taken as inspiration, tips, and recommendations, to be verified independently. I believe it can be a great tool if we know how to use it correctly. As with GPT-3, several model sizes have been released, ranging from mini with 125 million parameters to huge with 120 billion parameters.

I tried the standard Galactica model (even this mid-sized model with 7 billion parameters takes up nearly 30 gigabytes) on DigiHavel, and it "recommended" collaboration with the National Programme for the Development of Artificial Intelligence in the Czech Republic (NP AI), as well as with ČVUT, ČTU, and funding from MŠMT. It also suggested considering the Dialogflow framework. According to the paper, on specific scientific questions the standard model performs worse than the largest GPT-3. It's a pity that it's no longer so trivial to try out the huge model.
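The size figures above can be sanity-checked with simple back-of-envelope arithmetic. The sketch below is my own illustration, not anything from the Galactica release: it assumes full-precision fp32 storage (4 bytes per parameter), while actual checkpoints may ship in fp16, which would halve these numbers.

```python
# Rough checkpoint-size estimate: parameter count times bytes per
# parameter. Assumes fp32 (4 bytes/param); fp16 would halve this.

def model_size_gb(params: float, bytes_per_param: int = 4) -> float:
    """Approximate model size in gigabytes."""
    return params * bytes_per_param / 1e9

sizes = {
    "mini (125M params)": 125e6,
    "standard (~7B params)": 7e9,
    "huge (120B params)": 120e9,
}
for name, params in sizes.items():
    print(f"{name}: ~{model_size_gb(params):.1f} GB in fp32")
```

For the ~7-billion-parameter standard model this gives roughly 28 GB, matching the "nearly 30 gigabytes" observed above; the 120-billion-parameter huge model would need around 480 GB in fp32, which is why trying it out is no longer trivial.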

You have probably also noticed the new image-generation model, Stable Diffusion v2. The original SD v1 model from CompVis was unique primarily for its open-source code and has spawned hundreds of derived models and improvements worldwide. What distinguishes version 2 from version 1? I particularly appreciated the resolution: the default output is now 768×768 pixels, and Stable Diffusion 2.0 also includes an Upscaler Diffusion model that quadruples the image resolution. Furthermore, version 2 should make it harder for users to imitate the styles of specific artists or generate NSFW (nudity, pornography, etc.) outputs.
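To make the resolution numbers concrete, here is a small sketch of my own (not SD2 code): it assumes "quadruples the resolution" means a 4× factor on each side, i.e. 16× the pixel count, which is how the SD2 x4 upscaler is commonly described.

```python
# Resolution arithmetic for SD2 outputs: a 768x768 base image run
# through a x4 upscaler (assumed 4x per side, so 16x the pixels).

def upscaled(width: int, height: int, factor: int = 4) -> tuple[int, int]:
    """Dimensions after upscaling each side by `factor`."""
    return width * factor, height * factor

base = (768, 768)
print(upscaled(*base))  # (3072, 3072)
```

So a default 768×768 output becomes 3072×3072 after upscaling, compared with the 512×512 default of SD v1.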

Sources:

1.) https://www.seznamzpravy.cz/clanek/tech-techmix-po-protestech-vedcu-meta-novou-ai-vypnula-dva-dny-od-spusteni-219927

2.) https://paperswithcode.com/paper/galactica-a-large-language-model-for-science-1

3.) https://www.youtube.com/watch?v=ZTs_mXwMCs8

SD2:

1.) https://huggingface.co/stabilityai/stable-diffusion-2/discussions

2.) https://www.youtube.com/watch?v=HIak2kthBWQ

Original source: wordpress
