Loft founder Gregor Mittersinker speaks with Tim Martin of Yseop on the AI Uncovered podcast about AI, ChatGPT, and their impact on UX and design
By:
Gregor Mittersinker
November 20, 2023
This transcript presents an episode of the "AI Uncovered" podcast featuring Gregor Mittersinker. Earlier in the year, we sat down with Tim Martin of Yseop for a fascinating dialogue about generative AI and machine learning and their roles in driving innovation within regulated industries. With the AI software market on track to hit an astounding $14 trillion by 2030, the podcast series features discussions with industry innovators, offering insights into a rapidly evolving tech landscape.
Tim Martin, Executive VP of Product at Yseop, not only hosts the series but also leads a global product team, bringing deep expertise in the intricate world of product development. Join us as we explore conversations that shed light on the future of AI and its impact across diverse sectors.
Tim Martin: Welcome to "AI Uncovered," where your host, Tim Martin, who also leads product at Yseop, a company innovating in AI software, takes you on an exploratory journey into the realm of artificial intelligence. This podcast aims to demystify AI, delving into the market, technology, and real-world applications by engaging with experts and practitioners in the field. In this episode, Tim is joined by Gregor Mittersinker, the founder and creative director at Loft, a product development and design firm recognized for its integration of UX, engineering, and data science to create expansive product ecosystems. Gregor is celebrated in the design world, his career marked by accolades such as the IDSA IDEA, Red Dot, Good Design, and Core77 design awards. Bringing more than three decades of product development and design expertise, Gregor holds over 100 patents and has collaborated with renowned brands like Bose, 3M, Segway, and Seagate. He earned a BA and an MA from the Technical University of Vienna and the University of Applied Arts in Vienna, and he also teaches as an adjunct professor at the Rhode Island School of Design (RISD).
Tune in to gain insights from a seasoned professional who stands at the intersection of innovation, aesthetics, and functionality. Join us as we embark on a path of deeper understanding of AI with Gregor Mittersinker on "AI Uncovered."
Gregor Mittersinker: Thanks for having me.
Tim Martin: I'm excited to dive into the topic of how AI could influence user interface (UI) and user experience (UX) design. There has been a remarkable evolution in the principles and practices of UI/UX design over recent years, and it's important to establish a baseline understanding of these changes to appreciate AI's potential impact.
To begin, let's consider the most significant advancements in UI/UX design during the last decade. We've witnessed a shift towards more minimalist and flat design, emphasizing simplicity and the removal of unnecessary elements that may distract users. There's also been a greater focus on mobile-first design, acknowledging the predominance of smartphones as the primary device for internet access. Personalization has also taken center stage, with interfaces adapting to the individual user's behavior and preferences.
Accessibility has become a priority, ensuring that digital products are usable by people with a wide range of abilities. Another key development is the rise of voice user interfaces (VUIs) and conversational interfaces, marking a departure from traditional graphical user interfaces. Additionally, the integration of psychological principles through persuasive design has aimed to influence user behavior in a predictable way.
Understanding these milestones is crucial when considering AI's role in further advancing UI/UX design, potentially automating design tasks, personalizing user experiences at scale, and introducing new paradigms for human-computer interaction.
Gregor Mittersinker: In the past decade, there has been a transformative shift in the field of user experience (UX), emphasizing a human-centric approach. This approach, often referred to as human experiences (HX) or customer experience (CX), places the individual at the core of design, acknowledging the complexity of modern systems and the need for simplification to cater to human needs. To navigate this landscape, a service design methodology is employed, which offers the appropriate framework for developing large, integrated ecosystems. Successful products from this era have distinguished themselves by offering simple and intuitive solutions that cleverly conceal the intricate technical details beneath.
One of the most notable changes in recent years is the significant increase in computing power – with contemporary devices boasting capacities a thousand times greater than those a decade ago, such as comparing the first iPhone to current models. This advancement enables more sophisticated applications and functionalities. A key development in this domain is conversational user interfaces (UIs). These adaptive interfaces leverage advancements in natural language processing (NLP) and natural language generation (NLG) to facilitate more natural spoken or written language interactions. Examples include digital assistants like Alexa and Google Assistant, as well as platforms like ChatGPT.
In the sphere of conversational UI, several subcategories have seen rapid evolution.
Conversational Interfaces — These systems either interpret human text or generate written responses, serving as the backbone for AI-powered communication tools.
Voice-Enabled Applications — The concept extends to smart home technologies, allowing users to interact with their environment through voice commands.
Messaging Apps — These have become ubiquitous for their conversational UI, becoming a primary mode of digital communication.
Virtual Agents — Often referred to as chatbots, these can perform automated support tasks, assist with reservations, or aid in purchasing processes.
Augmented Reality (AR) Interfaces — Although not fully matured, AR interfaces present information overlaid on live or video feeds, promising an immersive user interaction experience.
These developments are intrinsically linked to enhancements in graphics processing units (GPUs) and overall performance capabilities, underlining the symbiotic relationship between technological advancements and UI/UX innovation.
Tim Martin: Yes, I find this discussion quite interesting, especially considering how concerns about the human-machine interface have evolved. Previously, the focus was on layout and color choices to facilitate easy human interaction with machines. More recently, however, we've seen advancements like the ability to converse with machines or to use augmented reality to overlay digital information onto our physical environment, whether through AR glasses or other technologies. These developments are steering things in a new and exciting direction. While older methods are not obsolete, there are now more options and ways for humans to engage with machines.
Regarding the widespread recognition of GPT, a few key developments have contributed to its popularity. For one, the adoption of a prompt-based interface aligns well with current interface design principles. Additionally, the integration of reinforcement learning has rapidly improved the models' capabilities. This leads to the question: What impact is ChatGPT having on individuals involved in UI design, or on your clients who might be asking new questions in light of recent global technological advancements?
Gregor Mittersinker: Yes, I think the call-and-response type of interface has been around for some time. What really makes ChatGPT stand out? We could also discuss Bing, as its language model is similar or employs a comparable philosophy. The key difference is that the responses are very intelligently tailored and minimal. Achieving this requires a tight integration among the UI team, UX team, and coding team to create a seamless experience. It's important to have content creators on staff to make it work effectively.
Take, for instance, an interview with the founder of ChatGPT. He mentioned that one of the most challenging aspects was defining the persona of the respondent. What is ChatGPT's persona when it responds? What does it feel like? That's a significant amount of work. Getting that right is where the magic happens on their side.
Tim Martin: Oh, yes. It's also intriguing how you can engineer prompts in such a way that they lead to varied responses. You can establish different personas, which adds an additional layer of complexity. Could you discuss the practicalities of LLMs today? What are people considering when implementing LLMs, perhaps in enterprise applications or other consumer applications? I'm curious about their thought process. What do they need to consider if they're planning to integrate LLM elements into their software?
Gregor Mittersinker: Yes, I would say that LLMs are essentially a sophisticated toolkit. They offer developers the opportunity to respond in more nuanced ways. As we discussed earlier regarding the persona of a prompt, it's all about sentiment analysis. For example, if you ask an engineering question, it responds like an engineer, or if it's an emotional question, it responds like a psychiatrist. This aspect of sentiment analysis allows an LLM to detect the emotional state of the person asking the question more accurately.
This capability leads to the notion of AI potentially taking over the world – it's about forming some sort of empathetic connection to the questions being asked. Consider the example of a modified ChatGPT being used in emergency scenarios, like calling 911, and discerning whether someone is distressed or just stuck in a traffic jam. These nuances enable AI to be more effective in providing responses or ensuring that the answers are relevant.
I believe that reinforcement learning from human feedback has significantly contributed to this development. But overall, sentiment analysis and the ability of an LLM to create nuanced, relevant content are the key areas where I see a significant impact. When developing with LLMs, it might be beneficial to shift focus from the end result to the experience itself. As a UX designer, this shift is quite apparent as we see people adopting new products built with a toolkit like this.
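To make the sentiment-driven persona routing Gregor describes a little more concrete, here is a minimal Python sketch. The keyword heuristic, the persona texts, and names like `classify_tone` are illustrative assumptions for this example only, not anything from ChatGPT's actual implementation; a real system would use a trained sentiment or intent model in place of the keyword lists.

```python
# A minimal sketch: guess the likely tone of a question and pick a response
# persona before the prompt ever reaches the model. The keyword heuristic and
# persona names are stand-ins for a real sentiment classifier and whatever
# personas a product team would actually define.

PERSONAS = {
    "technical": "You are a precise engineer. Answer with concrete steps and numbers.",
    "emotional": "You are a calm, empathetic counselor. Acknowledge feelings first.",
    "neutral":   "You are a helpful, concise assistant.",
}

def classify_tone(question: str) -> str:
    """Crude stand-in for sentiment/intent analysis."""
    q = question.lower()
    if any(w in q for w in ("error", "configure", "api", "install", "spec")):
        return "technical"
    if any(w in q for w in ("worried", "scared", "anxious", "upset", "help me")):
        return "emotional"
    return "neutral"

def build_prompt(question: str) -> str:
    persona = PERSONAS[classify_tone(question)]
    # The persona becomes the system framing; the user's text passes through unchanged.
    return f"{persona}\n\nUser: {question}\nAssistant:"

if __name__ == "__main__":
    print(build_prompt("I'm really worried about my bill, can you help me?"))
    print(build_prompt("How do I configure the API timeout?"))
```

The point of the sketch is the routing step itself: the same user question produces a different system framing depending on detected tone, which is one simple way an interface can feel more empathetic without changing the underlying model.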
Tim Martin: We discussed content creation, which I find quite interesting. Clearly, Large Language Models (LLMs) are adept at this, capable of generating a significant amount of content. However, there are varied needs, particularly in regulated industries. When using tools like ChatGPT, it's crucial to be cautious because you don't want sensitive data leaking into the public domain. But if you use a secure LLM, somewhat like Bing's prompt-based user interface, there are different considerations.
One question I have is about the role of humans when content is created almost automatically by these models. Humans are still part of the loop, assessing the information provided by the model. These models may sometimes produce information that's not completely accurate, up-to-date, or truthful. So, how can we design user interfaces (UI) to instill confidence in users? Is it about providing evidence, similar to how Bing indicates the source of its information? Are there other mechanisms or strategies that UI designers are using or exploring to address this? I'm curious to know your thoughts on this.
Gregor Mittersinker: We've been discussing industry-specific applications. Recently, I spoke with a friend who works at one of the world's largest insurance companies and leads their innovation lab. He mentioned that they had to restrict ChatGPT because various divisions were using it with proprietary information. However, they are now retooling their approach to create an industry-specific intranet version of ChatGPT. This model would be able to learn from the vast resources and knowledge within a large corporation. I think the first step of making the tool internal and ensuring it's used only within the company is a wise move. This can be achieved through a combination of policy and technology. Additionally, leveraging pre-built toolkits tailored for specific applications could be beneficial. This approach seems to be a practical way to utilize ChatGPT while safeguarding proprietary information.
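As a rough sketch of the internal, industry-specific assistant pattern Gregor describes, the example below grounds answers in a company's own documents and returns the sources each response drew from, so reviewers can check it. The in-memory corpus, the word-overlap ranking, and the omitted `generate_answer` call are hypothetical placeholders; a real deployment would use an embedding index and a privately hosted model.

```python
# Illustrative "internal ChatGPT" pattern: retrieve from internal documents,
# build a grounded prompt, and report which sources were used.

INTERNAL_DOCS = {
    "claims-handbook-4.2": "Water damage claims above $10,000 require a field adjuster visit.",
    "underwriting-memo-17": "Coastal properties need a separate windstorm rider as of 2023.",
    "it-policy-9": "Proprietary data must not be pasted into external AI tools.",
}

def retrieve(question: str, k: int = 2) -> list:
    """Rank internal docs by naive word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        INTERNAL_DOCS.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_sources(question: str) -> dict:
    sources = retrieve(question)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in sources)
    prompt = (
        "Answer using ONLY the excerpts below and cite the bracketed IDs.\n"
        f"{context}\n\nQuestion: {question}"
    )
    # generate_answer(prompt) would call the privately hosted model; omitted here.
    return {"prompt": prompt, "cited_sources": [doc_id for doc_id, _ in sources]}

if __name__ == "__main__":
    result = answer_with_sources("When does a water damage claim need a field adjuster?")
    print(result["cited_sources"])
    print(result["prompt"])
```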
Tim Martin: Yes, I think it's interesting to note the situation with Samsung. Some critical data reportedly left Samsung's secure environment and was released into the public domain via ChatGPT. This raises concerns as we consider the integration of interfaces like GitHub Copilot or Microsoft's plans to add LLM functionality to all their applications, including Word and Excel. One can imagine people using these prompt-based interfaces and inadvertently sharing sensitive data, simply because they are accustomed to these tools without fully considering the implications.
This situation presents an intriguing dilemma. For me, the next question is about the increasing reliance on prompt interfaces, especially since the advent of ChatGPT. Are people now over-relying on these interfaces? Do they believe that prompt interfaces will solve a multitude of issues, even when they might not be suitable for many applications?
Gregor Mittersinker: I think that many conventional interfaces have been built around the functionality of the internet, with hyperlinks significantly shaping our worldview. With the rise of LLMs, there's a shift towards more conversational user interfaces (UIs). This change allows users to interact more naturally, unlike the frustrating voice prompts of older technologies like my old Honda's system, which often failed to understand simple commands.
This evolution is why there's a trend towards prioritizing prompt-based interfaces. For industry-specific applications, having a conversational UI could be incredibly empowering. Imagine a text formatting function that could be activated with a voice prompt, like replacing all instances of "Larry" with "Karen," instead of manually searching and replacing text. This perceived 'magic' is something designers can capitalize on.
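As a concrete illustration of the "replace all instances of Larry with Karen" idea, here is a small Python sketch that turns a plain-language editing command into a find-and-replace. The tiny regex grammar is an assumption made for the example; in practice the parsing would be handed to an LLM or an intent model rather than a hand-written pattern.

```python
# Parse a plain-language editing command into a concrete find-and-replace,
# instead of asking the user to open a search dialog.
import re

COMMAND = re.compile(
    r"replace (?:all instances of )?(?P<old>[\w ]+?) with (?P<new>[\w ]+?)\s*$",
    re.IGNORECASE,
)

def apply_command(text: str, command: str) -> str:
    match = COMMAND.search(command.strip())
    if not match:
        raise ValueError(f"Unrecognized editing command: {command!r}")
    old, new = match.group("old"), match.group("new")
    # Whole-word replacement so partial matches are left alone.
    return re.sub(rf"\b{re.escape(old)}\b", new, text)

if __name__ == "__main__":
    doc = "Larry approved the budget. Send the summary to Larry by Friday."
    print(apply_command(doc, "Replace all instances of Larry with Karen"))
```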
There are two main aspects here: the potential of conversational UIs and the power of LLMs. Another consideration is the impact that LLMs and AI are making. As designers, we must ensure that these technologies do not lead to significant pitfalls. One challenge with ChatGPT is its lack of transparency; users often don't know where the data is coming from, leading to a 'magic factor' in its use.
However, in industry-specific platforms used for daily work, the ability to understand what's happening 'under the hood' is crucial. Most AI platforms targeting specific industries are addressing this need effectively. They focus on providing their customers with confidence that their models work reliably and transparently, with understandable and controllable biases. This transparency and control are essential in industry-specific AI applications.
Tim Martin: Your points resonate with me, particularly regarding the probabilistic nature of these models. You don't always get the same answer twice. Additionally, concerns about explainability, transparency, and bias are paramount, especially in industry-specific and enterprise contexts where the standards are higher.
Users of tools like content generation AI need to feel confident in the results they receive. Questions like, "Am I satisfied with the result? Is it truthful? Can it be explained?" are crucial. This ties back to what I mentioned earlier about the importance of evidence or other methodologies in interfaces. These features enable users to quickly assess generated content, understand its source, validate it, and then proceed with confidence. So, regarding current approaches: are these the strategies being adopted, or are practices more varied? Perhaps it's still too early to say, as people are still navigating and making sense of these new territories. Do you have any insight into how the field is evolving?
Gregor Mittersinker: I see a gap emerging, similar to what happened when multi-touch technology was introduced with the iPhone. Touchscreens had been around for some time, but the iPhone set a new standard with its multi-touch capabilities. Suddenly, every industry, whether it was manufacturing cash machines, kiosk interfaces, or medical devices, was held to a higher standard. Users no longer wanted to just tap on screens; they wanted the ability to zoom and interact more intuitively. I believe we are witnessing a similar trend with industry-specific AI systems. They are likely to be held to the standards of elegance and simplicity set by interfaces like ChatGPT. It's not just about performance but also about how these systems present themselves as simple, elegant, and conversational. This is certainly a challenge. The smart approach would be to focus on creating a user experience that resonates emotionally with users in a similar way, affecting them on an emotional level just as the iPhone's interface did.
Tim Martin: That's a great historical example. I remember when the capacitive touch wheel on the iPod was introduced; everyone was fascinated by it, leading to a surge in interest in capacitive touch technology. Then came the iPhone, which featured a screen that supported multi-touch, and even the force touch feature, which I personally haven't used much. This became a part of the interface experience across devices, regardless of whether they were Android phones or iPhones. Everyone began to adopt and standardize around these capabilities. So, it seems like you're expecting a similar process to occur with prompt interfaces and other similar technologies.
Gregor Mittersinker: Our design team has rapidly adopted AI technologies. We use AI every day in our work, and obviously, in the field as well, as we collaborate on various projects. I believe this trend of rapid adoption will be the same across all areas. I'm interested to see how this unfolds.
Tim Martin: I'm interested in understanding which resources or tools you think could be beneficial for UI/UX designers. You have a team working on these technologies; how do you see these tools changing the way they work? How will these tools assist them? Also, considering that UI and UX designers are creative people who currently have full control over their designs, how do you think this whole process will change from your perspective?
Gregor Mittersinker: There are two aspects to consider: working on AI-driven projects, like those involving LLMs, and using AI to assist in design work. I'll address both.
Firstly, LLMs and AI are likely to change the way we work as UI/UX designers significantly. I anticipate rapid adoption because these technologies will fundamentally alter our workflow. This isn't just about auto-layouts or similar features, but also about the increased demand on development teams to collaborate closely with UI teams to create something exceptional. UI/UX designers will play a crucial role in representing the customer's voice within the technology stack, focusing on elements like sentiment, reaction, and performance of the AI. It's about a tight integration of code and interface. Questions arise about what kind of interface is appropriate: should it be conversational, or a mix of conversational elements with selectors or sliders, perhaps even emotional sliders?
Secondly, AI is likely to free up designers from routine tasks, increasing efficiency. Those who leverage this efficiency effectively will likely succeed. AI offers the opportunity to focus on aspects of projects that we currently may not have time for. For example, the nuances of a project or improving the UX could benefit significantly from AI. Currently, our work is somewhat defined by the limitations of the canvas we're building on, but with AI, we can delve deeper into that canvas, exploring new dimensions and possibilities.
Tim Martin: I'm not entirely sure if this is the correct approach, but I can envision utilizing AI early in the creative cycle, before any designs are finalized. You could collaborate with an AI to rapidly generate concepts and iterate on them with a client, establishing baseline directions quickly. Then, an experienced UI designer could take over, refining the details and guiding the process toward a more specific and nuanced outcome. Does that sound like a feasible approach, or do you envision it differently?
Gregor Mittersinker: Yes, I agree with your perspective. AI provides the capability to quickly create and respond to design concepts. From there, you can identify the most successful path and develop it more comprehensively and in greater depth. Moving to the second part of my answer, as AI empowers us with the ability to be more nuanced, it also opens up possibilities to reimagine systems that are currently outdated. For instance, the future of hotel check-in experiences could be entirely transformed with the integration of AI.
Tim Martin: That needs to be next, by the way; hotel check-in drives me crazy. I had a similar experience when checking into a hotel overseas. They had a user-friendly interface for the initial steps, but then the process reverted to the traditional, tedious method for the rest of the check-in, which was frustrating. It felt like they only completed part of what was needed; the entire process should have been automated, with a proper backend system to support it. This is indeed a prime example of how AI and new techniques, perhaps even some form of prompting, could significantly improve the experience. Regarding tools, I've noticed that Adobe is integrating AI into their products. Are there any specific tools currently available that are particularly meaningful for UI/UX professionals, or are you still in the process of evaluating various tools?
Gregor Mittersinker: You're right in saying that UX design, when viewed as an onion with multiple layers, actually constitutes only a small portion of the entire process. We use various tools like MidJourney and ChatGPT, among others, to guide us toward our desired outcomes. As for layout tools, they're not quite there yet. Simply pressing an 'Auto Layout' button won't yield the desired results, as our work is nuanced and technical.
However, we do use common tools daily as we adapt to new technologies. For tasks like icon development, we delve deep into AI toolkits because they offer designers a starting point. You can generate a lot of options quickly and then decide which ones are worth refining. This approach is more efficient than starting with a blank canvas, which can sometimes lead to getting stuck. Let's continue exploring how these tools can be utilized effectively in our design processes.
Tim Martin: Turning to recommendations for companies developing software with AI or LLMs as part of their solutions, could you offer a couple of suggestions on moving forward and building a process that ensures optimal results?
Gregor Mittersinker: When integrating LLMs into a UI or product solution, especially in product development, it's crucial to have a close collaboration between the development and UX teams. In a conventional UX/UI development process, you typically start by creating a workflow, then move on to designing a layout in Figma, and finally progress to the development sprints. However, with LLM integration, you might need to think in reverse: start by building the technical core stack, and then focus on refining the LLM opportunities to create nuanced solutions, such as a conversational UI. This involves testing how it works in a small sandbox before expanding it to the rest of the system. Another critical aspect is the training of the LLM or generative AI solutions. This training is as important as building the system itself, so you essentially have to build the product and then build it again. This needs to be a key consideration in roadmap planning because the model's training is equally crucial to the success of the system.
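One way to picture the "small sandbox" step Gregor mentions is a throwaway harness that exercises the conversational layer against the core stack in isolation and logs every exchange for joint review by UX and engineering. The sketch below is hypothetical; `fake_model` stands in for whatever model endpoint the real stack would expose.

```python
# A tiny sandbox loop: talk to the conversational layer from the command line
# and log every turn so the team can review tone and accuracy before the UI is
# wired into the full product.
import json
import time
from typing import Callable

def fake_model(prompt: str) -> str:
    """Placeholder for the real model call behind the core stack."""
    return f"(stub response to: {prompt[:40]}...)"

def sandbox_session(model: Callable[[str], str], log_path: str = "sandbox_log.jsonl") -> None:
    with open(log_path, "a") as log:
        while True:
            user_input = input("you> ").strip()
            if user_input in ("", "quit", "exit"):
                break
            reply = model(user_input)
            print(f"bot> {reply}")
            # Every turn is recorded for later review by UX and engineering together.
            log.write(json.dumps({"ts": time.time(), "prompt": user_input, "reply": reply}) + "\n")

if __name__ == "__main__":
    sandbox_session(fake_model)
```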
Tim Martin: That's a valid point. Integrating AI into a system is indeed wonderful and beneficial, but it's also crucial to transition it smoothly into production. This requires a process that doesn't disrupt or cause issues every time there's an upgrade or a new version released. Managing this aspect can be quite challenging.
Gregor Mittersinker: That's an important consideration, especially for our clients who work in industry-specific applications. They often find that a general API like ChatGPT isn't suitable for their needs. Even if security concerns are addressed, the issue is that ChatGPT is too broad. For their purposes, the model needs to be pre-trained and highly specific to their industry or application. While an open-source platform may be useful for building a prototype, utilizing a pre-trained model is essential for developing an industrial-grade, market-ready product.
Tim Martin: That approach makes a lot of sense. We're indeed working on something similar. When using a prompt-based interface, it's crucial to be aware of the potential vulnerabilities, as open prompt interfaces can be exploited to manipulate LLMs. The prompt, the training, and the underlying model are all interrelated, requiring careful management to ensure that the final product delivers the right experience for industry-specific software. This brings us back to the concept of trust and confidence in the accuracy of the results provided. Regarding the future, I'm familiar with Ray Kurzweil and his predictions as a futurist. His discussion of the potential sentience of AI by the end of the decade, and of developments like Neuralink that aim to connect the human brain directly to computing systems, is fascinating. It's intriguing to consider how the interfaces we're discussing today might evolve. How might these advancements change the landscape of UI/UX design in the next five to ten years?
Gregor Mittersinker: Neural interfaces are not new; they have been around for a long time. But they are becoming more sophisticated and increasingly productized.
Tim Martin: The issue is bandwidth, right? I think in the past, there wasn't enough bandwidth to make that interface practically useful. Yeah, okay.
Gregor Mittersinker: I believe there will be some great treatment models enabled through neural interfaces. If we can connect directly to the brain, many diseases could potentially be eradicated more easily. Neurology could be completely redefined, impacting conditions like epilepsy or even obesity. Predictive models suggest that by the end of the decade, conditions like epilepsy might be curable through such interface models. I'm not overly concerned because I believe there is potential to use this tool in beneficial ways. Of course, there might be negative aspects, just as social media has had a significant impact on our society in ways we never anticipated.
Tim Martin: Well, yes, I think about how people talk about social media. It was the first time advanced AI was broadly used with human beings. Some even call it a failed experiment because it optimizes for things that aren't human-centric. It focuses on engagement rather than what's good for humans. That's an interesting dilemma. We're likely to encounter more dilemmas like this, questioning what AI is optimizing for and whether it aligns with human needs. This is a significant issue because if these models become super intelligent and aren't aligned with human values, they could pose an existential threat. I don't believe we're there today, but it's a topic many discuss. Returning to the interface aspect, do you envision a moment in the near future where perhaps your phone becomes obsolete, and you have a chip in your head, doing all the things you currently do on your iPhone or Android device, but in a more direct way, without the need for these devices?
Gregor Mittersinker: I'm sure it's coming. That's essentially just a technology roadmap at this point, right? As a designer, I always look at the underlying emotional values that technology provides. For example, a smartphone is really about connectivity and access to information. If another interface can enable that faster and more efficiently, there will definitely be a place for it.
Tim Martin: Well, I appreciate all the insights. It's time for the "AI Uncovered" fast-fire round. I'm going to hit you with four quick questions and get some quick answers from you about your thoughts. First question: Do you have a favorite book that incorporates AI or talks about AI?
Gregor Mittersinker: The "Children of Time" trilogy, particularly "Children of Ruin," which delves into AI, is something I would highly recommend everyone read. It's an incredible book. I'm not typically a sci-fi guy, but this is a sci-fi novel that's truly wonderful. It offers a glimpse into the future of mankind. Adrian Tchaikovsky is a notable author in this genre, but what sets him apart is his background in biology, which allows him to reimagine the future through a biologist's lens. That's fascinating.
Tim Martin: Yeah, I like that. I'm a sci-fi fan, so I'll definitely read that one. How about a movie? A favorite movie that incorporates AI?
Gregor Mittersinker: Oh, that's easy. Blade Runner. I think, first of all, Ridley Scott is a great director. His work is truly timeless. You can watch his movies now, and even though the special effects might seem somewhat dated, they still hold up pretty well. This is largely because the stories are great and written really well, with excellent character development. Yeah, I think his movies are real winners.
Tim Martin: Okay. And then how about an application? Are there any AI applications out there that you're watching right now, or that you're using on a daily basis?
Gregor Mittersinker: I think ChatGPT really has the ability to impress due to its simplicity, especially for us in the industry. It's a bit like, "Okay, they've done something right." If they nailed one thing well, it's the interface. They've done a really good job with the nuance of the responses and the simplicity of how they react, which is very surprising for your average consumer. And you mentioned Bing – I think Bing is also using the framework in a more conversational, two-way manner, which is quite powerful. However, I wouldn't say it's my favorite application. I like using MidJourney, if you're familiar with it; it's an art generator. But all these tools are still in beta, in my opinion. If you really think about it, the reason these tools are open to the public is that they still need a lot of development.
Tim Martin: Well, it seems like everybody is just slapping an interface on top of ChatGPT and trying to turn it into an app, which isn't as useful as it could be. Right, exactly, it's still the early days. But I'm curious – you use ChatGPT quite a bit, it sounds like. Have you been able to measure any efficiencies gained as a result of using the app? Are you, for example, 10% more efficient on a daily basis? What do you think?
Gregor Mittersinker: I think my approach to work has changed. For example, I wrote an article last month about generative AI and language generation models. I think I sent it to you, though I'm not sure if you had a chance to read it. It involved a lot of research into language models and understanding their nuances while also forming a point of view. Traditionally, you could achieve this through Google searches, but using ChatGPT to almost 'road test' these answers, asking it for feedback on my thoughts, has been helpful. It's almost like having a knowledgeable sounding board, right? Yeah.
Tim Martin: Yeah, we're all prompt engineers these days. What's your prediction for the date of the singularity, at which AI becomes sentient?
Gregor Mittersinker: I don't think it will happen in the way we anticipate. The singularity may have already occurred – who knows? It's always a subject of endless discussion and speculation. I think it will be one of those things where we look back and realize, "Oh, this is when it occurred," but by no means will we be able to predict exactly when it happens or what it looks like.
Tim Martin: Well, Gregor, this has been awesome. I love UI UX. I love what you guys do. I really appreciate the insights. And I'm sure our listeners enjoyed the show. So thanks so much, and we look forward to catching up soon.
Gregor Mittersinker: Sounds like a plan. Thanks, Tim, for the opportunity. Appreciate it.
Tim Martin: I want to thank Gregor for a fascinating discussion about user experience and interface design today. We talked a lot about ChatGPT, AI, and their impact on user interfaces and design, specifically focusing on conversational UI and prompt interfaces. We also discussed how industry-specific applications might use a variety of UI techniques to address the nuances of AI. Additionally, we touched on the future of user interface design. It was an awesome show. I hope the listeners enjoyed it, and I look forward to seeing you on the next podcast.