OpenAI GPT-4 Multimodal Model
OpenAI has just introduced GPT-4, its latest multimodal model, which is capable of passing the bar exam as well as the SAT. Multimodal means the AI can generate content from both image and text prompts, while ChatGPT Plus subscribers can already use the model with text-only input.

In a simulated bar exam, the test law school graduates must pass before entering professional practice, GPT-4 scored in the top 10% of test takers, while the older model ranked around the bottom 10%. Like its predecessors, GPT-4 still has limitations and is not completely reliable, mainly because it can make reasoning errors. With that in mind, care should be taken when using language model outputs, especially in high-stakes contexts, with the exact protocol matched to the needs of the specific use case.

“GPT-4 can accept a prompt of text and images, which—parallel to the text-only setting—lets the user specify any vision or language task. Specifically, it generates text outputs (natural language, code, etc.) given inputs consisting of interspersed text and images,” said OpenAI.
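For readers who want a sense of what an interspersed text-and-image prompt looks like in practice, below is a minimal sketch using the OpenAI Python SDK. The model name, the image URL, and whether image input is enabled on a given account are assumptions for illustration, not details from the announcement.

from openai import OpenAI

# Assumes the OPENAI_API_KEY environment variable is set.
client = OpenAI()

# Hypothetical example: the model name and image URL are placeholders,
# and image input may not be available to every account.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is happening in this photo."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)

# The model replies with ordinary text (natural language, code, etc.).
print(response.choices[0].message.content)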
