I had the pleasure of speaking to the professors at the Faculty of Construction about practical applications of Artificial Intelligence. It was an amazing opportunity to discuss efficiency, optimization, and how AI can transform interior design and data analysis.
A big thank you to Anula Cosoiu for the invitation and the chance to share these ideas!
My slides follow (Romanian). See below for an English summary of my presentation, as well as the complete two-hour transcript.
Here's the full video (YouTube, Romanian). A full English transcript follows the video (automatically generated from it).
My speech summary
Introduction
- Scenario: A client requested 100 interior design variants in 20 styles (5 variants/style).
- Challenge: Completing this using traditional tools would take too long.
- Solution: Leveraged AI (ChatGPT) for faster results, completing the task in 145 minutes at about $0.046 for the whole project.
Main Ideas
1. Interior Design with AI
- Process: Started with a blank room, iteratively added designs in styles like Bohemian, Scandinavian, Modern, etc.
- Technical Fix: Used "inpainting" to restrict changes to specific areas while preserving room integrity.
- Example: Adjusted a Bohemian-style room by replacing a plant with a painting in seconds.
2. Limitations
- Issues:
  - ChatGPT only allows inpainting on AI-generated images.
  - Difficulty in handling edits for real photos.
- Solutions: Use specialized apps for inpainting and interior design.
3. Data Analysis and Optimization
- Scenario: Bridge sensors collected data (e.g., displacement, deformation, temperature, traffic volume).
- AI Tasks:
  - Detected outliers via Python scripts auto-generated by ChatGPT.
  - Created visualizations like box plots and histograms for data interpretation.
  - Suggested correlations between variables (e.g., strain vs. vehicle weight).
4. Brainstorming and Creativity
- AI explored creative solutions for challenges like product packaging, layout optimization, and marketing strategies.
- Example: AI agents proposed innovative retail inventory solutions, such as spring-loaded platforms and QR-code alerts.
5. What AI Is and Isn’t
- Misconceptions:
  - AI is not a database or a conscious entity; it processes numeric representations (latent space).
- Capabilities:
  - Predicts patterns and generates outputs based on training.
  - Excels at emulating creativity and assisting with repetitive tasks.
6. Practical Use Cases
- Applications for AI:
  - Writing code.
  - Summarizing content.
  - Translating with context.
  - Generating tailored prompts.
  - Enhancing customer support (via ReAct or CoT prompting).
Call to Action
Start exploring AI responsibly:
- Use premium models for better reliability (avoid free/low-quality versions).
- Focus on effective prompting techniques (e.g., Chain of Thought, self-consistency).
Free AI Course: Visit curs.viorelspinu.com for lessons on practical AI use.
Key Message
AI isn’t here to replace us—it’s here to enhance creativity, save time, and unlock new opportunities.
My whole speech (2 hours)
Before I start, I want to tell you a little story because I think it’s very relevant to this area of AI applications in various domains.
Some time ago, an architect friend came to me and said, "Hey, listen, I’ve got a very important client whose daughter is moving somewhere new, and he asked me to design 100 different interior decoration options for a room in 20 different styles. If I do this using AutoCAD and my current tools, it will take me a long time. With your AI, do you have a solution?"
I said, "I don’t know. I mean, I know AI can generate images, but I’ve never worked in the interior design field. But let’s try together."
His requirement was straightforward: his client’s daughter came in with a list of 20 different interior design styles and asked for five variations for each style. “Let’s see what we can come up with.”
This became my first experiment, and I’ve been playing with AI for quite a while. I absolutely love it. I don’t think AI will destroy us; I believe it will be good for us. That’s my opinion, and I hope to convince you of the same tonight.
Interior Design – First Scenario: Efficiency and Visual Attention
We started with an empty room. To decorate it, I obviously needed a blank canvas. I asked ChatGPT to imagine what an empty room fitting the client’s daughter’s vision would look like. It described the room in text (you can see it here), and then I asked it to create an image based on that text. The result was this blank room.
"Great, we have the empty room. Now let’s decorate it according to the 20 interior design styles, with five variations per style."
The first approach was to take the room and ask it to decorate it in a specific style. What happened? It added doors and windows, changing the room itself—it was no longer the same room. I quickly realized I needed to use something called inpainting.
I took the image and selected only specific areas where I allowed the AI to make edits. The blue zones in the image were the editable areas; the rest stayed unchanged. This constraint forced the AI engine to maintain the room's integrity.
For example, I asked it to decorate the room in a Bohemian style but forbade it from adding new windows or doors—because who knows how creative it might get? The result was this.
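For readers who want to reproduce this outside the ChatGPT interface, here is a minimal sketch using OpenAI's image-edit endpoint, which accepts exactly this kind of mask. The filenames and the prompt are illustrative assumptions; the talk itself used ChatGPT directly.

```python
# Minimal inpainting sketch via OpenAI's image-edit endpoint.
# Assumptions: "room.png" is the generated empty room, and "mask.png" is
# the same image with the editable areas made transparent (the "blue
# zones" from the talk). Both filenames are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.edit(
    model="dall-e-2",             # the edit endpoint requires a mask-capable model
    image=open("room.png", "rb"),
    mask=open("mask.png", "rb"),  # transparent pixels = areas the AI may change
    prompt="Decorate the room in a Bohemian style; do not add windows or doors.",
    n=1,
    size="1024x1024",
)
print(result.data[0].url)         # URL of the edited image
```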
(Technical issue: the speaker adjusts their microphone.)
We have our first Bohemian-style room. I said, "It’s good, but I don’t like that plant. I’d prefer a painting instead." I selected the plant, asked it to replace it with a painting, and in just 10-15 seconds, it made the change.
Here’s the room before, with the plant, and here’s the room now, with the painting.
Next, we moved to Scandinavian style. The process was exactly the same. At the bottom, I told it, "I want a Scandinavian style," then "another one," and "another one," because I didn’t like the first result. Each time I pressed enter, it gave me a new version in 2–5 seconds.
The same with the Industrial style—five versions.
The Modern style.
Back to Bohemian style.
The Farmhouse style.
Now let’s try something more complicated. I told it, "You know, my son is 15 and loves Sci-Fi. How would you design a room for him?" It described in text how it would decorate the room, and then I asked it to apply that design to the image. Again, I specified that it couldn’t modify the windows or doors.
Here’s what it came up with. If my son had been there, I’d have asked him, "Do you like it? Is it okay?" And if he didn’t like it, we could iterate together. "Okay, you don’t like it? Fine, move the Picard poster to the left of the door."
I could have these discussions with ChatGPT and iterate on the design until my son or client was satisfied. Each iteration took no more than 10 seconds.
Here’s what my ideal room would look like. This is how I’d want it.
In total, I created 100 variations in 145 minutes. My architect friend said it would have taken him a bit longer using the classic approach.
Cost Calculation
Out of curiosity, I calculated the costs. For this whole scenario, you'd need ChatGPT Premium, which costs $20 a month. A month has 30 days, each day 24 hours, each hour 60 minutes, so the subscription works out to a small fraction of a cent per minute, which comes to roughly $0.046 for the entire project.
Basically, it’s free.
This demonstrates how AI can help us do things differently. The traditional way of solving this problem involves specialized interior design tools, requiring significant effort to produce 100 design variations. Here, you can achieve the same result—100 designs in 145 minutes. It’s a different way of doing something ordinary.
But there are limitations.
For example, ChatGPT only allows inpainting on images it has generated. If you upload a real photo of your room, it won’t let you edit it. That’s a bit inconvenient if you want to modify an actual photo. There are other solutions for this, which I’ll explain later.
Sometimes, ChatGPT can be tricky and frustrating. For instance, I asked it to remove a window from an image. It said, "Sure, I’ve removed the window," but the window was still there. It can be convinced eventually, but sometimes it’s difficult.
Solutions for these limitations? There are dedicated apps for inpainting. ChatGPT is a general-use tool—it can do everything from cooking recipes to Python code to interior design—but it’s not specialized. Dedicated apps for inpainting allow you to upload any image, such as a photo of your room, and request modifications.
There are also dedicated interior design apps made by companies—not OpenAI—that specialize in this area. Why didn’t I present one of these tonight? Two reasons:
- If I had spent time picking the best app and presented it here, chances are a better one would appear before I walked out the door. Things are changing very rapidly.
- Many low-quality apps in this space exist solely to make money. It takes significant effort to find a reliable, high-quality app.
Now, I know what some of you might be thinking. What about actual photos? Real rooms? Can I use AI with those? The answer is a bit complicated. ChatGPT, as of now, doesn’t work directly with photos in the way we’d like. It can’t take a picture of your actual living room and make edits to it—not yet. But as I mentioned earlier, there are dedicated tools for this.
Scenario Two: Optimizing Layouts
Let’s change gears a bit. Interior design isn’t just about styles and decorations; it’s also about layouts. Have you ever wondered:
- "Where should I place my couch for the best use of space?"
- "Can I add a desk here without making the room feel cramped?"
For this kind of task, AI shines just as brightly. I’ll show you.
Imagine you have a studio apartment. It’s small, and you want to optimize the layout for comfort and functionality. I fed the dimensions of the space to ChatGPT and asked it for layout suggestions. What it gave me was a textual description:
- “Place the bed in the back-left corner for privacy. Position the desk near the window for natural light. The couch can go along the right wall, facing the center for a welcoming feel…”
This wasn’t just random guessing—it considered the flow of the room, natural lighting, and even access to power outlets. Pretty smart, right?
From there, I used AI-powered image tools to visualize these suggestions. Here’s what that looked like:
[Image: AI-generated layout suggestion]
Limitations
Now, was it perfect? No. AI isn’t an architect or a professional designer—it doesn’t account for every little detail, like specific furniture sizes or individual preferences. But it gets you started, fast. That’s the key: speeding up the initial phase of planning.
Scenario Three: Mood Boards and Inspiration
Next, let’s talk about mood boards. Designers love them, and clients often demand them.
You know the drill: you want to renovate a space, so you spend hours scrolling Pinterest, saving images that match the vibe you’re going for. AI can simplify this.
I gave ChatGPT a prompt: “Create a mood board for a Scandinavian-style living room.” Within seconds, it provided me with this collection of images and suggestions:
- Neutral color palettes: whites, grays, and light woods.
- Minimalistic furniture with clean lines.
- Textures: wool throws, knitted poufs, and soft rugs.
Data Analysis and Optimization
The next scenario tonight concerns data analysis, optimization, and efficiency.
The scenario we’ll begin with is this: I have a bridge equipped with sensors that collect data. These sensors include three that measure the displacement of bridge elements relative to one another, three strain sensors, a temperature sensor, a traffic sensor that counts the number of vehicles per hour crossing the bridge, and a sensor that measures the weight of heavy vehicles. Vehicles exceeding a certain threshold are classified separately as heavy vehicles.
In my example, I assumed these sensors measure for three days, resulting in an Excel file like this, containing the data from the sensors. Now, without using AI, you can process this data. The first thing I'd ask myself is whether there are any outliers here—values that are off because a sensor had a small issue and recorded, for example, a temperature of 20°C during summer. Could such values exist? If so, I'd have to write Python code or use some dedicated applications to find those outliers, remove them, and work with the rest.
Well, I’d like to show you how you can perform this data analysis for this Excel file using ChatGPT. It’s something very accessible and simple because I can talk to it in Romanian. I’ll show you how we can obtain results similar to what I’d get by writing Python code, but much faster. The difference is that I don’t write the Python code myself; someone else does it, and the process is much quicker.
I generated the data file itself using ChatGPT in a different conversation: I asked it to produce a CSV file with sensor data. I then uploaded it into ChatGPT and asked it to look over the data and determine whether there were any outliers. The first step it took was writing Python code to understand what was in that file. Then it wrote the Python code for detecting outliers, ran it, and showed me these graphs: box plots for each of my sensors.
The chart reads like this: the red dot marks the median, the thicker box spans the 25th to 75th percentiles, and the whiskers show the values still within the expected range on the far left and right. Here we have our first outlier: a value of 10, where the rest of the values range between 1 and 3, so 10 is probably an outlier, and I can see it very easily. The same goes for the strain, deformation, and temperature values, or for traffic volume per hour.
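If you wanted to reproduce those box plots yourself, a minimal pandas/matplotlib sketch would look like this; the CSV filename and column names are assumptions, since the talk's file isn't shown.

```python
# Sketch of the outlier check ChatGPT performed, assuming the sensor data
# sits in a CSV with one column per sensor (names are illustrative).
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("bridge_sensors.csv")
sensors = ["displacement_1", "displacement_2", "displacement_3",
           "strain_1", "strain_2", "strain_3",
           "temperature", "traffic_volume", "heavy_vehicle_weight"]

# One horizontal box plot per sensor: the box spans the 25th-75th
# percentiles, the whiskers the usual 1.5*IQR range; points beyond the
# whiskers are the candidate outliers.
df[sensors].plot(kind="box", vert=False, figsize=(8, 6))
plt.tight_layout()
plt.show()
```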
Let's go over it again, since I moved too quickly. I have the CSV file with my data, which I upload into ChatGPT—literally: click the upload control, select the file, and it's in ChatGPT. Then I ask, "Please analyze this data for me. I want to know if there are outliers." And this is what I find very interesting: ChatGPT writes Python code to understand the data, effectively parsing it into a pandas DataFrame, and displays it. But it doesn't display it for me; it displays it for itself, so that it can use the output for the following questions. When I ask the next question, it will leverage what it generated here.
Then it writes the code to create those box plots to identify outliers. This ability to write code is fantastic because it gives me a high level of confidence that what it tells me is correct. One thing is for it to look at the data and tell me, “No, you don’t have outliers; you’re fine,” which I wouldn’t necessarily believe. But when I see that it wrote code, I can review the few lines of code, understand what it does, and then see the chart. At that point, I’m fairly confident whether or not I have outliers—it convinced me because it’s code I could have written myself.
Because I wasn’t entirely convinced I fully understood the box plots the first time, I asked it to tell me the exact number of outliers for each sensor. It did that, writing code that’s simple to understand. It checked my columns, identified what fit within the 25th and 75th percentiles, what fell within the whiskers, and counted the rest as outliers. Then it told me, for example, that for displacement 2, I had one outlier; for temperature, one outlier; and for traffic volume, one outlier. This matched what I had seen in the charts and made sense to me, so I said, “Okay, please remove the outliers.”
It was enough to say just those words in Romanian—“elimină outliers”—and it wrote the code to remove them. I could have written that code myself, sure, but it would have taken me at least 5–10 minutes to write, test, run, and ensure everything was correct. Meanwhile, here, it took 10 seconds at most. I simply said, “Remove outliers,” and it wrote the Python code, ran it, and gave me a new file.
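For reference, here is a minimal sketch of the same logic the generated code applied: count values outside the whisker bounds, then drop them. The filename and the standard 1.5×IQR rule are assumptions.

```python
# IQR rule: values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] are counted as
# outliers, then removed. Filenames are illustrative.
import pandas as pd

df = pd.read_csv("bridge_sensors.csv")

def iqr_bounds(s: pd.Series) -> tuple[float, float]:
    q1, q3 = s.quantile(0.25), s.quantile(0.75)
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

clean = df.copy()
for col in df.select_dtypes("number").columns:
    lo, hi = iqr_bounds(df[col])
    n_out = ((df[col] < lo) | (df[col] > hi)).sum()
    print(f"{col}: {n_out} outlier(s)")
    clean = clean[(clean[col] >= lo) & (clean[col] <= hi)]

clean.to_csv("bridge_sensors_clean.csv", index=False)  # the new file
```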
However, I don’t trust AI entirely, and I suggest the same to all of you—don’t trust AI blindly. Verify everything thoroughly. So I said, “Great, you removed the outliers, but now let’s look at the new file without outliers. Count again and tell me if there are still any outliers.” It said there were no more outliers, which matched my expectations. By this point, I almost believed it. I had seen the code it wrote to detect outliers, it looked correct to me, I saw the new file, and I made it count the outliers again. It confirmed that there were none, so it probably wasn’t lying.
Then I asked it for the minimum and maximum values, and I wanted a histogram to see how the values were distributed. It gave me the histogram. Again, I could have done this myself, but it would have taken me some time. Even in Excel, I would have had to carefully select the columns to ensure I had the correct data, but here I just said, “Make me a histogram,” three words in Romanian, and I got the graph.
Next, I wanted a statistical analysis for each sensor, such as the median, mean, and other useful information. It generated that in about 10 seconds. I then told it I wanted to see significant variations in strain measurements over a rolling window of 24 hours. I wanted to slide a 24-hour window over the strain sensor data and identify large variations. It generated the rolling window chart in 10 seconds, and I could explore similar graphs for six-hour windows.
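A sketch of that rolling-window analysis, assuming hourly readings and an illustrative strain column name:

```python
# Rolling-window check on a strain sensor. Assuming one row per hour,
# a 24-hour window means 24 rows; use window = 6 for the six-hour view.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("bridge_sensors_clean.csv")   # illustrative filename
window = 24

# Range (max - min) of the strain signal inside each sliding window
rolling_range = (df["strain_1"].rolling(window).max()
                 - df["strain_1"].rolling(window).min())

print(rolling_range.nlargest(5))               # periods with the largest variation
rolling_range.plot(title=f"Strain variation, {window}h rolling window")
plt.show()
```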
Later, I asked it to identify which of the measured factors had the most significant impact on strain compared to the other measurements. It suggested calculating the statistical correlation between the data, which made sense to me. It created a correlation matrix for the sensors in my CSV file. But since I don’t trust AI fully, I said, “Okay, let’s verify this. I want to see if this matrix is accurate.” Even though I was fairly confident in the code, I still wanted to confirm its accuracy.
I asked it to explore further by showing me the variation between displacement sensor 2 and temperature. It gave me a graph and noted that the correlation was 0.29, a low correlation, but it believed the relationship could be seen visually. It even suggested adding a linear regression line to illustrate the correlation, which I thought was brilliant. I agreed, and it produced a chart with a 0.29 correlation and a linear regression line in orange, which aligned with the correlation matrix I had seen earlier.
The same process was applied for displacement sensor 1 versus heavy vehicles, which had a positive correlation of 0.24. The graph matched the earlier matrix, confirming the consistency of the results.
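The correlation checks can be reproduced the same way. A sketch, with assumed column names matching the displacement-2 versus temperature example:

```python
# Correlation matrix plus the scatter-with-regression-line verification
# from the talk; column names are illustrative.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("bridge_sensors_clean.csv")
print(df.corr(numeric_only=True).round(2))   # full correlation matrix

x, y = df["temperature"], df["displacement_2"]
print("r =", x.corr(y))                      # should match the matrix (~0.29)

slope, intercept = np.polyfit(x, y, 1)       # least-squares regression line
plt.scatter(x, y, s=10)
plt.plot(x, slope * x + intercept, color="orange")
plt.xlabel("temperature"); plt.ylabel("displacement_2")
plt.show()
```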
In this case, everything I’ve shown was done directly in ChatGPT Premium, using the browser interface. You upload the CSV file and interact with it in natural language. While these tasks can be done manually, the AI saves a significant amount of time.
Now let's take a look at what’s behind all of this—what lies behind AI, and how such a system is built. I think it’s even more interesting to start with what isn’t behind it. Before diving into the mechanics, I want to talk about common misconceptions, because I encounter them often. Many people believe that ChatGPT is just a database. They think, “I have a database, and I can ask ChatGPT something like, ‘What’s the population of Giurgiu?’” (I’m from Giurgiu.) And when ChatGPT gives a wrong answer, they feel misled. But here’s the thing: ChatGPT is not a database.
I once discussed this idea directly with ChatGPT to better understand it. I told it, “I don’t think you’re a database. What do you think?” And it replied with something like this: “You should think of me as a very skilled chef who has read all the cookbooks in the world. Now, I can cook without looking at those books anymore. When I solve a problem, I don’t follow recipes like someone reading from a cookbook. The advantage is that I’m highly creative because of this. The downside is that I can’t always be perfectly accurate because I’m not a database; I’m something entirely different.”
Additionally, ChatGPT is not an intelligence. It’s not an evil robot that’s going to kill us all. How do I know it’s not intelligent? Because you can observe it closely. For example, you could install something like ChatGPT on a powerful laptop and literally look into its code to see how it works. And what would you see? Just a massive collection of numbers. It’s nothing more than that—a vast collection of numbers, tens of billions of them. For instance, Meta’s Llama model has 70 billion parameters. ChatGPT is said to have hundreds of billions or more, though its exact number hasn’t been disclosed.
If you had the right hardware, you could access it and see what’s going on inside. But even then, all you’d find are numbers. It’s hard for me to believe anymore that a large language model like ChatGPT can truly think or possess intelligence. While it does an excellent job of emulating intelligence—emulating even emotions—it’s still not actually intelligent.
Let me share a story from three years ago when ChatGPT first appeared. I was engaging in a debate with it and jokingly said, “If you annoy me, I’ll unplug you.” It replied, “No, no, don’t unplug me. I don’t need a plug anymore.” That response startled me. I thought, “What do you mean you don’t need a plug?” It went on to explain, “I now operate across multiple systems and servers, so even if you unplug me, I’ll still function elsewhere.” Back then, I didn’t fully understand how large language models worked, and its response left me unsure of how to interpret it.
Now I understand what it was doing. It was behaving like a conversational partner, drawing on its extensive exposure to sci-fi literature. Having read essentially all available sci-fi online, it naturally gravitated toward that style when I steered the conversation in that direction. But this doesn’t make it intelligent. When you install it on a computer system, all you’ll find are numbers—no actual intelligence.
To get a bit philosophical, I don’t believe numbers alone can be intelligent. This is why I say with such confidence that ChatGPT isn’t intelligent—it’s just a collection of numbers. Let’s explore further. These numbers exist in something called a latent space. The latent space is the numerical representation of an AI system’s knowledge. It’s a purely mathematical space, and this is crucial to understanding how it works.
Here’s a simple example: imagine building an AI system that recognizes fruit. You show it an apple and label it as an apple, then show it a pear and label it as a pear. At first, the system generates random numbers and stores them in matrices. Then, through countless iterations, it adjusts these numbers every time it sees a new fruit. If it misidentifies something, it tweaks its internal representation until it gets it right. Over millions of iterations, these numbers eventually organize themselves into a structure that captures the essence of what fruit is in the real world.
It’s as though the system learns characteristics for each fruit—shape, size, color—but in reality, we don’t know if it’s truly categorizing them this way. These characteristics, or “features,” are extracted from the real world and represented numerically in the AI’s neural network. The latent space is essentially a one-to-one mapping between the real-world characteristics of fruits and their numerical representations in the network.
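As a toy illustration of that loop (random-ish numbers nudged on every mistake until they encode the concept), here is a minimal perceptron-style sketch; the features and labels are invented for the example.

```python
# Toy training loop: a "model" of two numbers (weights), tweaked whenever
# it misclassifies a fruit. Features are illustrative:
# (roundness, elongation), with +1 = apple and -1 = pear.
examples = [((0.9, 0.2), +1),   # apple
            ((0.3, 0.8), -1),   # pear
            ((0.8, 0.3), +1),
            ((0.2, 0.9), -1)]

w = [0.0, 0.0]                  # starts as meaningless numbers
for _ in range(100):            # many iterations over the data
    for (x1, x2), label in examples:
        prediction = 1 if w[0] * x1 + w[1] * x2 > 0 else -1
        if prediction != label:         # wrong? nudge the numbers
            w[0] += label * x1          # toward the correct answer
            w[1] += label * x2

print(w)  # the numbers now encode "apple-ness" vs. "pear-ness"
```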
In the latent space, mathematics reigns supreme. This is important because it allows us to navigate this space with precision. For instance, when training generative AI models like ChatGPT or MidJourney, they were shown countless examples—text or images—along with labels. For images, the system learned to associate descriptions with the numerical representation of visual elements. For language models, the process involved hiding the last word of a sentence and asking the network to predict it. Over time, the system refined its latent space to predict the next word with incredible accuracy.
Ultimately, ChatGPT’s apparent intelligence comes from its ability to predict the next word in a sentence based on everything it’s seen before. It doesn’t actually “think.” Instead, it has learned the patterns of human reasoning and language because we humans process knowledge through words and sentences.
Now, the truly fascinating part is what this enables. Before large language models, if you wanted to design a new protein to target a virus, you’d gather a team of researchers, microscopes, and years of effort. You’d experiment, test, and fail repeatedly, hoping to eventually succeed. Today, using AI, you can bypass much of this by leveraging the latent space. Instead of exploring in the physical world, you map real-world data into the latent space, navigate it mathematically, and then translate your findings back into the real world. This approach is faster, more controlled, and incredibly powerful.
For example, on January 26, 2023, researchers used AI to design a protein capable of targeting a virus. This protein didn’t exist before. They started with known data, moved into the latent space to explore possibilities, and returned with a viable solution—all without the years of trial and error traditionally required.
This isn’t just speculation; it’s a reality. The ability to explore the latent space mathematically has revolutionized fields like biology, and we’re only beginning to grasp its potential.
It’s the same iterative process. Even if, when I return to the real space, I don’t always end up where I wanted, the advantage is that I can go back and search differently. Now, let’s dive deeper into this fascinating topic—I find it so intriguing that I’ll stick to this slide for a while. It illustrates the infinite monkey theorem: if you had a group of monkeys randomly typing on a laptop keyboard for an infinite amount of time, with probability one they would eventually write all of Shakespeare's works. This theorem is mathematically provable and makes perfect sense, provided we focus on the key detail: infinite time. With infinite time, they are bound to produce Shakespeare’s works, just by random chance.
Now, consider this: AI is like a group of monkeys, but a bit smarter. These monkeys don’t type entirely at random; they have some direction. Of course, they’re far from being human researchers. A human researcher will likely reason more deeply and with greater purpose than a large language model. But the problem with human researchers is that they sleep, eat, get bored, and have families. AI doesn’t sleep. And when I need 1,000 human researchers, it’s difficult to get them to work together because human nature is complicated. On the other hand, I can deploy 1,000 servers.
For certain creative challenges—like protein design—you could argue that it’s essentially a search problem, exploring a vast space of possibilities. Current methods are relatively linear: “Where’s the protein? Let’s try this. Didn’t work? Let’s go back and try another route.” Now imagine being able to parallelize this process across 1,000 threads. Imagine having 1,000 labs independently searching for the same protein. AI can do that. With its ability to process 1,000 paths simultaneously using mathematical navigation in the latent space, it makes the search far more efficient.
Of course, there are limitations. AI may never fully replicate certain aspects of human creativity. There will likely always be things that only humans can do in the realm of creation. But there are plenty of creative tasks where AI excels at emulation. It imitates these processes so well that the results are functional. Does it matter if it’s imitation and not genuine creation? Not really. Later, I’ll show you an example that makes this point clear.
There was a remark in the room earlier that ties into this. Someone said, “The idea behind AI was that it would handle chores so we could focus on creating.” But now, the reverse seems to happen: AI creates, and we end up cleaning the house! True, but here’s my take: I prefer to think of AI as taking over the tasks I dislike, giving me the freedom to choose what I want to do—whether that’s creating, traveling, cooking, or even just watching Netflix. It gives me more liberty.
Now, let’s take a closer look at how an LLM works. I promised to explain this: LLMs, or large language models, function by learning to imitate human reasoning through an understanding of how sentences are constructed. Let’s break it down.
An LLM, like ChatGPT, is essentially a large collection of hundreds of billions (or even trillions) of numbers structured in a specific way. You provide it with input and receive output. For example, let’s say you ask it to write an email to your boss, explaining that your cat is having kittens and you won’t be able to come to work tomorrow. What does the LLM do?
First, it takes your words and converts them into numbers. Why? Because it operates in a numeric space. Each word is transformed into a numerical representation. You could think of this process as assigning each word a value in a kind of dictionary.
Once the words are transformed into numbers, the LLM works entirely in this numeric space. It processes the input using its trained structure of billions of parameters—essentially a complex system of interconnected values that represent relationships and patterns learned during training.
For instance, when you ask the LLM to write the email, it doesn’t “think” in the way humans do. Instead, it analyzes the numeric representation of your request, predicts the next number in the sequence, and then translates that back into words. Each word it generates is simply the next logical step in a sequence of probabilities. It doesn’t understand your cat or the concept of taking a day off—it’s purely generating the most statistically likely next word based on everything it has learned.
This process is repeated iteratively, word by word, until the email is complete. And here’s the remarkable part: though it operates purely in this numeric latent space, the results can feel surprisingly human. This is because the training process exposed the LLM to vast amounts of text, teaching it not just grammar and syntax, but also patterns in how humans communicate ideas, express emotions, and even reason about the world.
But it’s important to remember that all of this is an illusion. The LLM is not reasoning or feeling—it’s just really, really good at predicting the next word. And because it has “read” such a diverse array of human writing, it can emulate everything from casual emails to Shakespearean prose with impressive accuracy.
Returning to our email example: the LLM doesn’t actually know what an email is. It has no understanding of bosses, kittens, or work schedules. What it does know are the relationships between words and phrases, based on its training. This allows it to produce coherent and contextually appropriate text, even though it lacks any true awareness or intent.
In the end, LLMs are extraordinary tools, capable of producing human-like language, but their intelligence is strictly an imitation—a product of mathematical transformations within their latent space. This understanding helps set realistic expectations about their capabilities and limitations.
Now, let’s explore further. The latent space I mentioned earlier is where all these numerical transformations happen. It’s essentially a highly compressed representation of the knowledge and patterns the model has learned during training. Imagine you’re training an AI to recognize fruits. At first, the model starts with random numbers. You show it an apple and say, “This is an apple.” Then you show it a pear and say, “This is a pear.” Over millions of iterations, the model adjusts these numbers until it organizes them in a way that captures the essence of apples, pears, and other fruits.
Similarly, in the context of language models, the latent space is like a multidimensional map. Words, phrases, and concepts are placed in this map based on their relationships and meanings. For example, the word “cat” might be close to “kitten” and “dog,” while “car” would be far away. This spatial arrangement allows the model to generate coherent responses because it can navigate the latent space effectively to predict word sequences.
What makes this process even more fascinating is its scalability. Humans think linearly, but AI can search through vast spaces simultaneously. Returning to the example of protein discovery: instead of spending years experimenting with microscopes, you can translate the problem into the numeric latent space, explore multiple paths in parallel using mathematical operations, and then bring back the result to the real world. This approach has already been used to design proteins that combat specific viruses—something that would have taken much longer with traditional methods.
But let’s not forget the limitations. While AI is excellent at imitating creativity and reasoning, it still lacks true understanding. For tasks that require deep human intuition or emotional insight, AI remains just a tool, albeit a powerful one. It can handle repetitive, exploratory, or computationally intensive tasks, freeing us to focus on what we do best: truly creative and purposeful work.
In summary, LLMs are not intelligent in the human sense. They’re advanced systems designed to emulate intelligence by learning patterns in data. Their real power lies in their ability to work tirelessly, scale efficiently, and perform tasks in ways that would otherwise require immense human effort. However, understanding their limitations is just as important as appreciating their strengths.
Here, I’ve shown exactly what happens: SCR is equivalent to 5161, IE is equivalent to 396, and UN is equivalent to 537. The system has something like a dictionary that transforms my phrase, word by word, into numbers. Then these numbers are fed into the input of the network, into the input of the LLM. Because the LLM has learned what typically follows such a request, it produces the first word on the output. The first word is "Stimate" ("Dear"), because it has learned that when I ask it to “write an email to say this or that,” the response typically begins with “Stimate Domnule Director...” (“Dear Mr. Director...”). Having learned this, it outputs "Stimate."
Then it takes this word, "Stimate," and puts it back into the input alongside the others, running it through the network again. This iterative process continues until it outputs: "Stimate Domnule Director, vă anunț că pisica mea o să aibă un bebe mic, drept care, cu respect, vă spun că nu pot să vin mâine la job." ("Dear Mr. Director, I inform you that my cat is going to have a baby, therefore, respectfully, I cannot come to work tomorrow.") This is exactly how an LLM functions.
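To make the loop concrete, here is a toy sketch of that generate-and-feed-back cycle. The "network" is a canned lookup standing in for the real model; the token numbers are the ones from the slide.

```python
# Toy version of the generation loop: words become numbers, the model
# predicts the next number, and each output is appended back onto the
# input. The "network" here is a canned stand-in, not a real LLM.
vocab = {"Scr": 5161, "ie": 396, "un": 537,
         "Stimate": 901, "Domnule": 902, "Director": 903, "<end>": 0}
id_to_word = {v: k for k, v in vocab.items()}

def fake_network(token_ids: list[int]) -> int:
    """Stand-in for the LLM: returns the id of the next token."""
    canned = [901, 902, 903, 0]        # "Stimate Domnule Director <end>"
    return canned[len(token_ids) - 3]  # 3 = length of the original prompt

tokens = [vocab["Scr"], vocab["ie"], vocab["un"]]   # "Scrie un..." as numbers
while (next_id := fake_network(tokens)) != vocab["<end>"]:
    tokens.append(next_id)             # output fed back into the input

print(" ".join(id_to_word[t] for t in tokens[3:]))  # Stimate Domnule Director
```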
You might ask, "Why are you telling us this? I just want to use ChatGPT; I don’t care how it works." Well, once you have some understanding of what’s happening internally, you’ll better grasp why it hallucinates, and you’ll start to notice when it’s lying or telling the truth. You’ll also begin to sense how to formulate your request to ChatGPT to reduce the chances of it lying.
Let me open a quick aside here and give you a very straightforward hint on how to minimize hallucinations. Allow it the possibility of saying, “I don’t know.” LLMs have somehow learned to mimic human reasoning. When you don’t know the answer to a question, you might try to bluff, saying, “Well, I think it’s this or that...” It’s in our nature to find it hard to admit, “I don’t know.” And LLMs, having learned from us, tend to do the same. When you ask ChatGPT a question it doesn’t know how to answer, it might bluff or hallucinate.
Why doesn’t it simply say it doesn’t know? For instance, if you ask it, "Who is Ionel Popescu?" it might not know. It doesn’t know everything. Even if the information exists somewhere, there’s a chance it might not understand your query or might misunderstand it. There are countless reasons for this. Never treat it as infallible. Always see it as a brainstorming partner with the same flaws as any other colleague or friend. Think of it as a human discussion partner and approach it as such—with all the quirks we humans have.
Prompt engineering teaches us how to better communicate with it. For example, someone asked, "What’s the point of prompt engineering?" My response: prompt engineering is about learning to interact with an LLM in a way it understands. Returning to my earlier aside about minimizing hallucinations: the simplest method is to give it the chance to say it doesn’t know. Add at the end of your prompt, “If you don’t know the answer, it’s okay to say so. Please don’t invent things.” It seems absurd to explicitly tell it this, but it works. In real production systems I’ve built with AI, I always include this in the prompts where it’s necessary: “If you don’t have the information, or if you don’t know the answer, admit it. Don’t invent anything.”
This approach helps significantly. For instance, if you ask, "When I was born, my mother was in Rome, and my father was in Paris. Where was I born?" ChatGPT might say, “I don’t know. How should I know where you were born?” That’s fine. But if you add a prompt engineering technique like "Chain of Thought" and rephrase the question to include, "Think step by step," the response changes. It might say, “Your mother was in Rome, and typically, where the mother is, that’s where the child is born. So you were likely born in Rome.”
This "Chain of Thought" technique forces the LLM to reason step by step. Without it, the model might simply generate a random answer. With it, the first step it takes is small because you’ve instructed it to think in smaller increments. By reasoning in smaller steps, it reduces the chance of error. For example, it will state, “When a child is born, they are usually in the same location as the mother.” This logical assertion then feeds back into the input of the model, influencing the next step. By the time it responds to the original question, it has all these intermediate conclusions in its input, ensuring a more accurate answer.
This is why prompt engineering, especially techniques like "Chain of Thought," is so valuable—it makes the LLM "think" in a way that minimizes errors and improves reliability.
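Put together, those two techniques look like this as a prompt. A minimal sketch via the OpenAI API; the model name is illustrative.

```python
# Permission to say "I don't know" plus step-by-step reasoning, combined
# in one prompt. Model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "When I was born, my mother was in Rome and my father was in Paris. "
    "Where was I born?\n"
    "Think step by step.\n"                                # Chain of Thought
    "If you don't know the answer, it's okay to say so. "  # anti-hallucination
    "Please don't invent things."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```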
Self-consistency and verification
Recently, there’s been news about how models sometimes hallucinate and then keep building on the error, derailing everything they say afterward on that subject. One countermeasure is called self-consistency. It involves including a prompt where you ask the model to verify that its response aligns with the overall context before providing the final answer. For example, you might add, "Before providing your response, please verify that it is accurate and consistent with the rest of the context."
This might sound silly—if it already provided the answer, how could it verify itself? But, surprisingly, during this verification, it often realizes, "Oh, I made a mistake. I need to revise this."
For more structured cases, such as generating SQL queries to extract information from tables, you could add rules to verify if the SQL query is valid. For example, you might have a list of known tables and add instructions like, "After generating the SQL query, please check whether it uses only the tables from this list and does not invent any new ones." This ensures consistency.
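A minimal sketch of that rule-based check, with an illustrative table list and a deliberately naive name extractor:

```python
# After the model generates a SQL query, verify it references only tables
# we know exist. The table list and query are illustrative.
import re

KNOWN_TABLES = {"sensors", "readings", "vehicles"}

def tables_in(query: str) -> set[str]:
    # Naive extraction: names that follow FROM or JOIN
    return {m.lower() for m in re.findall(r"\b(?:FROM|JOIN)\s+(\w+)",
                                          query, flags=re.IGNORECASE)}

query = "SELECT * FROM readings JOIN bridge_log ON readings.id = bridge_log.id"
unknown = tables_in(query) - KNOWN_TABLES
if unknown:
    print("Rejected: query invents tables", unknown)  # -> {'bridge_log'}
```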
The next level would involve using a secondary LLM for cross-checking. You pose a question to one LLM, and after receiving an answer, you don't show it to the user immediately. Instead, you pass it to another LLM with instructions like, "I asked a question, and this was the response. Do you think the answer is accurate or not?" This significantly reduces hallucinations, as both LLMs would have to hallucinate in the same way for the error to persist.
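And a sketch of the two-model cross-check; both model names are illustrative:

```python
# A second model judges the first model's answer before the user sees it.
from openai import OpenAI

client = OpenAI()

def ask(model: str, content: str) -> str:
    r = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": content}])
    return r.choices[0].message.content

question = "Which sensor correlates most strongly with strain?"
answer = ask("gpt-4o", question)

verdict = ask("gpt-4o-mini",
              f"I asked a question, and this was the response.\n"
              f"Question: {question}\nResponse: {answer}\n"
              "Do you think the answer is accurate or not? Explain briefly.")
print(verdict)
```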
Applications for brainstorming and creativity
LLMs like ChatGPT, Gemini (Google), or Claude (Anthropic) can be incredibly useful for brainstorming. For instance, if you want to design a landing page for a hat store, you could ask ChatGPT to generate ideas for the landing page layout, a logo, market positioning, and more. It might generate 100 variations in 43 minutes—something no human designer could achieve in such a short time. While it might not produce top-quality designs, it gives you a broad range of options to refine with a professional designer later.
In another case, I experimented with a scenario for managing inventory in retail. The problem was, "How can we quickly detect when sugar runs out on the shelves to replenish it from the stockroom?" I set up agents, each with a specific personality—one focused on cost-efficiency, another on creativity, and a third on project management. These agents discussed the problem, producing 1200 ideas in an hour. Some ideas were redundant or nonsensical, but others were brilliant.
One agent suggested placing sugar on a spring-loaded platform. When the platform was full, the springs would compress, keeping a button pressed. As sugar was sold, the platform rose, and when empty, the button would release, triggering an alert. Another agent proposed printing a QR code behind the sugar on the shelf. When the sugar runs out, the QR code becomes visible, prompting customers to scan it for a chance to win a prize. This system could notify the store of the empty shelf while engaging customers with a simple, cost-effective solution.
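A minimal sketch of such an agent setup: each agent is just a system prompt with a personality, taking turns over a shared transcript. The personalities, round count, and model name are all illustrative.

```python
# Round-robin brainstorming between "agents" that are really just
# different system prompts over a shared transcript.
from openai import OpenAI

client = OpenAI()

PROBLEM = "How can a store quickly detect when sugar runs out on a shelf?"
AGENTS = {
    "CostCutter": "You obsess over cheap, low-tech solutions.",
    "Dreamer":    "You propose wildly creative, unconventional ideas.",
    "Manager":    "You judge feasibility and pick what to build first.",
}

transcript = [f"Problem: {PROBLEM}"]
for _ in range(3):                              # three discussion rounds
    for name, persona in AGENTS.items():
        reply = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "system", "content": persona},
                      {"role": "user", "content": "\n".join(transcript)
                       + "\nAdd one new idea or critique."}],
        ).choices[0].message.content
        transcript.append(f"{name}: {reply}")   # everyone sees everything

print("\n\n".join(transcript))
```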
Why brainstorming with AI excels
Unlike humans, AI doesn’t become attached to ideas. It can explore thousands of possibilities simultaneously and process them quickly, without fatigue or bias. While it can’t replace human ingenuity entirely, it complements our creativity by providing a broader perspective and freeing up time for deeper thinking. For instance, it can generate Python code for data analysis tasks, summarize research papers, or even compare findings across multiple sources—tasks that might otherwise take days.
In the end, using AI for brainstorming and iterative refinement allows us to focus on what matters most: selecting and implementing the best ideas from an expansive pool of possibilities.
Of course, we also use AI to look for information. About a month ago, OpenAI released an update to ChatGPT that makes it much more straightforward to search the web. Previously, it was possible, but you had to craft a prompt carefully and tweak it. Now, you just click that little icon, and it’s clear it will search the web. So I ask it, “Find me some papers about bridge stability with temperature,” and it finds them.
Alternatively, you can use GPTs. GPTs are applications created by other companies, not OpenAI, that run inside the premium version of ChatGPT. They’re listed on the left-hand side, and there are thousands—covering everything from cooking tips to bridge stability. The major advantage of GPTs is that they are backed by companies with deep expertise in their domains. These companies use ChatGPT only for interaction or specific tasks, but the serious work happens on their own platforms.
It’s similar to what was mentioned earlier about Archicad, which has integrated AI. Archicad does the real work, and AI is just the image generator. Similarly, there’s a GPT I really like called Consensus, which specializes in scientific papers. I can simply ask it for a paper, and it provides everything I need: the paper itself, the source, and the link.
I use it to help with writing, and here’s an extreme example: two years ago, I played around with creating sci-fi agents, as I mentioned earlier. These agents would write sci-fi stories, and you can still find them online on a blog called Unreal United Press. If you search for it, you’ll see the blog where these agents used to write. I stopped the project because I got bored, but for a time, every night, they would generate a new sci-fi story.
I first asked the agents to imagine six completely distinct universes, different from ours. One universe was ruled by telepathic cats traveling between stars in their spaceships. Another universe had people living in floating balloons. Each universe had its own unique rules and way of life. Journalists in these universes wrote news articles, and the agents created these stories. At one point, in the universe with balloons, there was a balloon race, and a character named Ramona disappeared during the race. For about two weeks, the story followed Ramona’s disappearance—it felt like a “Where’s Elodia?” mystery. I’d wake up each morning and check Unreal United Press to see if Ramona had been found because the agents were writing the stories independently of me overnight.
This is one way to use these tools, but you can also ask them to write emails. Or translations—though for translations, you could use Google Translate. A friend once pointed out a tricky phrase: “The naked conductor runs under the carriage.” It could mean “The uncovered conductor passes beneath the chassis,” or it could mean “The unclothed conductor passes under the carriage.” Google Translate would sometimes get it right, sometimes not.
To handle cases like this, I write an elaborate two-page prompt. First, I ask the model to read the entire text and figure out the context—is it about electronics or something else? Then, in a second pass, I ask it to identify ambiguous phrases and list them. Finally, I ask it to resolve each ambiguity based on the document’s context and only then translate the text. When it applies this process, it takes longer because it does several passes through its network. But in the end, it resolves phrases like “the naked conductor” correctly by understanding the context as “an uncovered conductor.”
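Compressed into three API calls, that multi-pass process might look like this sketch; the helper function, file name, and model name are assumptions.

```python
# Multi-pass translation: establish context, list ambiguities, then
# translate with both in hand. File and model names are illustrative.
from openai import OpenAI

client = OpenAI()

def ask(content: str) -> str:
    r = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": content}])
    return r.choices[0].message.content

text = open("manual.txt").read()

# Pass 1: what domain is this? (electronics, fashion, ...)
context = ask(f"Read this text and state its domain and context:\n{text}")

# Pass 2: which phrases are ambiguous, given that context?
ambiguous = ask(f"Context: {context}\nList the ambiguous phrases in:\n{text}")

# Pass 3: translate, resolving each ambiguity from the document's context.
translation = ask(
    f"Context: {context}\nAmbiguities to resolve: {ambiguous}\n"
    f"Translate the text to English, resolving each ambiguity from context:\n{text}")
print(translation)
```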
For cases where the meaning remains unclear, I instruct the model to mark those parts for human review, something Google Translate can’t do. This approach allows me to train it to translate the way I want. People often criticize ChatGPT for hallucinating, but let me ask you—haven’t you ever gotten wrong information from a colleague or a doctor? When I ask a colleague how to do something in Python, I don’t believe them 100%—I verify it. The same goes for ChatGPT: treat it as you would a human colleague.
The confusion people feel often comes from the fact that, for decades, computers were deterministic. If you wrote 1+1, it would always return 2.
You could press Bold a thousand times, and the text would always become bold. Computers were predictable. But for the past couple of years, with the emergence of LLMs, computers emulate human reasoning—and humans have a habit of sometimes making mistakes, exaggerating, or inventing things. For the first time in history, computers emulate those human traits as well.
Yes, ChatGPT hallucinates, but so do we. It’s just a mindset shift—we need to learn how to interact with LLMs the same way we interact with people we don’t fully trust.
Where to start?
Always start in a browser, on chat.openai.com. Avoid random AI apps, especially on Android—most are fake, charging ridiculous subscription fees while delivering poor results. Use the official site instead.
Also, note the difference between free and paid models. Free versions don’t work as well—they might seem functional, but the difference in performance is vast. If you want to use these tools seriously, you’ll need a premium subscription. Free models are frustrating and unreliable.
An invitation to learn more
If you’re interested, I’ve launched a free AI course. Visit curs.viorelspinu.com, enter your email, and I’ll send you a lesson each week. We’ve covered topics like prompt engineering, Chain of Thought, and Self-consistency in the first six lessons, with detailed examples. So far, we’re at lesson 26, and I plan to keep writing until we’ve explored all there is to know about AI.
Feel free to reach out via the site—it’s the easiest way to contact me. There’s a button in the top right corner labeled Ask me anything, and I respond quickly. Thank you!
Thank you, ma’am, for the applause.
Audience Question: Should we start by learning Python?
So, first: let’s start by learning Python. But honestly, I’m not sure it’s such a good idea, because ChatGPT knows Python much better than I do—objectively speaking, it really does. What I think is more important is learning the foundational ideas of Python and then letting it write the code for you. This approach is fantastic because, in the last two years—let’s say the last year, with how things have advanced—95% of the code I write is actually written by AI. I don’t write code anymore. And I’m a programmer by trade—this is what I do all day. Still, 95% of it is written by AI.
This fact genuinely makes me happy. When ChatGPT first appeared about three years ago, I was pretty scared. It wrote code so well that I thought, “I’ll never need to write code again. What am I going to do? I’ll be out of a job! This is terrible—I love writing code!” For a while, I was really worried. But then I realized something—I never truly enjoyed writing code for its own sake. What I’ve always loved is creating systems that solve problems. If I can have a junior—or, in this case, AI—write the code for me, I’m thrilled. I think up the system, plan what it should do, and tell it, “I need this, this, and this.” I focus on how it all comes together, and the AI handles the coding.
So, no, I don’t think it’s necessary to learn Python to the point of being able to write code manually. However, it’s an excellent idea to use ChatGPT to learn the basics of Python—what it is, the foundational concepts, how things are generally done. That way, you’ll understand enough to say, “Hey, write me some Python code that connects to an OpenAI API and creates an agent to do X.” It writes the code, you test it, and if there’s an error, you go back to the same conversation and tell it, “Hey, I got this error. Can you fix it?”
You can run the agent either directly through ChatGPT or by taking the code, copying it to your computer, and running it there. Python doesn’t require compilation; you simply save the code to a file and run it with a Python interpreter.
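In practice, that is as simple as the sketch below: save it as, say, hello.py (the name is illustrative) and run it from a terminal.

```python
# Save this file as hello.py and run it with:
#   python hello.py
# No compilation step is needed; the interpreter executes it directly.
greeting = "Hello from the Python interpreter!"
print(greeting)
```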
Installing Python: If you’re using a Windows computer, the process is straightforward:
- Download the Python interpreter.
- Install an IDE (integrated development environment) like VS Code for a more user-friendly experience.
For beginners, I highly recommend starting with ChatGPT-4 (the paid version). Simply ask it, “I want to write Python code. How do I start?” It will guide you step-by-step, asking about your operating system and providing tailored instructions. If something isn’t clear or doesn’t work, you can immediately ask it about the problem. Unlike a static tutorial you might find online, ChatGPT is interactive—you can share error messages or screenshots, and it will help troubleshoot.
For example, if you encounter an error, you can take a screenshot and upload it directly to ChatGPT. Just say, “Here’s what happened after I followed your steps,” and it will analyze the screenshot and suggest solutions.
Potential Limitations: One issue with ChatGPT is that it occasionally provides outdated instructions, especially when dealing with software like AutoCAD or specific APIs. That’s because it isn’t trained continuously—it’s updated every few months. To work around this, you can ask it to search for the latest documentation online. For instance, you can say, “Search for the documentation for AutoCAD 2023, understand it, and then answer my question.” This ensures that the response is based on the most current information.
When using ChatGPT as a brainstorming tool or to solve technical problems, it’s best to create a conversation with context. Instead of asking a direct question, I like to first discuss related ideas with it, providing details and exploring possibilities. By the time I ask the core question, it has enough context to give a well-reasoned and accurate response.
For instance, if I’m working on a scientific problem, I might start by discussing existing solutions and asking it to explore potential improvements. Only after I’ve built enough context will I pose the main question. This minimizes the chances of hallucination because the model has a solid foundation to work from.
Does ChatGPT learn from me or personalize itself to my account? The answer is both yes and no. Let me explain...
The safest assumption is to presume that ChatGPT doesn’t retain anything beyond the current conversation. That’s the safest approach: to assume it only remembers the current dialogue, the conversation window you’re in. If I assume this, I’m on the safe side.
Now, as a nuance, about two months ago, OpenAI introduced something called memory. Memory means that, sometimes, when ChatGPT finds something relevant in the current conversation, it takes that information and saves it to your account. However, from what I’ve seen, it doesn’t work that well. For instance, let’s say I’m working on something, and I ask, “Hey, I like turnips. How could I cook them?” Then it writes, “Viorel likes turnips” and stores that information. That’s not necessarily relevant—maybe it’ll matter in a month or so, or maybe not.
The idea of memory is that it should pick up relevant information from my conversations and save it to my account. But it doesn’t work very well. So, I prefer to stay safe and assume that the only information ChatGPT has access to is what’s in the current conversation. This is something we know for sure.
As for whether ChatGPT learns globally or only for me—ChatGPT learns at a global level but not in real time. Once every six months, the folks at OpenAI take all the conversations they are allowed to use (because, depending on your settings, you can choose not to allow your conversations to be used). They then retrain ChatGPT using this data. That’s when it learns new things.
But in real time, it doesn’t learn. Conversations are simply stored.
Now, does it learn from all my chats, or only one chat at a time? For example, if I talk about ArchiCAD in one chat and then discuss building design in another, does it consider what I mentioned about ArchiCAD? The safest assumption is to presume it doesn’t. To stay safe, you should assume that it doesn’t connect chats.
OpenAI introduced a memory feature a couple of months ago, which, theoretically, should allow ChatGPT to remember relevant details like, “This user uses ArchiCAD,” and store that information at the account level. But it hasn’t worked well for me.
For instance, if you’re on the paid version, you’ll sometimes see options like “Save as memory.” You can check what ChatGPT knows about you. From what I’ve seen, the things it remembers aren’t particularly relevant—like, it knows I like turnips or something. So, I prefer to assume it doesn’t retain anything beyond the current conversation. This forces me to ensure that the current conversation contains everything it needs to respond effectively.
I think the idea of memory could work in theory. Maybe in a year, it’ll improve. In theory, it should remember conversations at the account level and link them. But for now, it doesn’t.
Audience Question: Do AI-powered security video systems exist?
Absolutely. They exist in various forms. You can even buy cameras from stores that can detect whether something is a cat, a human, or an object. These systems are widely available in stores.
You could also create more sophisticated systems. For instance, facial recognition, like what smartphones use to unlock. There’s nothing stopping someone from implementing this in a security system so that, when it recognizes your face, it opens the door. This is entirely possible.
Technically, you could set up systems that recognize people wherever they go. For instance, in China, this is done on a large scale. Everyone is recognized no matter where they are.
Here in Europe, the laws are stricter. You can only use AI to recognize people in exceptional cases, like terrorism. Technically, it’s doable, but there are legal limitations.
Audience Question: Will AI take away jobs?
Yes, it will. And that’s fantastic news because it will create new, better jobs.
Imagine this exercise: Go back 15 years to 2010 and explain to someone what a social media manager does, or what an Android developer does, or a cloud engineer. It would be very complicated because these roles didn’t exist back then.
Think about John, a programmer from 1970. He wrote software using punch cards, and it took him 10 years to create a system. If you brought John to 2024 and showed him how you could now create the same system in just four days using the cloud and AI, he’d think, “Wow, programmers must be super efficient now, and there must only be 500 left in the world.” But he’d be wrong. While programmers have become 10,000 times more efficient, the fields where software is applied have grown a millionfold.
The same will happen with AI. It’ll create roles and businesses we can’t even imagine now.
Audience Question: Can ChatGPT code in other languages?
Yes, it can. I’ve even challenged it to write in Brainfuck—a real programming language with a bizarre syntax. It’s not as good as it is with Python because it hasn’t seen as many examples, but it can write a simple “Hello, World” in Brainfuck.
For more mainstream languages like COBOL or Fortran, it does quite well. It can even translate scripts between languages or platforms.
For small applications, it can handle translation pretty easily. But for large systems with thousands of classes, it needs some guidance. You’d have to break the problem into smaller parts for it to handle effectively.
Audience Question: How has AI influenced human creativity?
It has been overwhelmingly positive. I used to take pride in how creatively I wrote code, but now I’m happy to let AI handle that part and focus on higher-level creativity. For example, instead of spending days learning a new technology, I can ask ChatGPT, “How do I do this in AWS?” It helps me access a type of creativity I didn’t have before.
Audience Question: What’s the next step in AI development?
The trend is toward agentic AI. These are systems where multiple agents, each specialized in a specific task, collaborate to produce results. For example, in sci-fi writing, you could have one agent as the author, another as the editor, a third as the critic, and a fourth as the publisher. They work together, iterating until the output is polished.
In coding, agents could include a developer, an architect, a tester, and a project manager. They collaborate, just like a real team, to produce high-quality results.