When using GenerativeWTiN, it’s important to write good prompts to give the system the best chance of returning a high-quality response. Remember, GenerativeWTiN draws on textile-specific data, giving it a deep base of in-depth technical content from which to build a response, but, unfortunately, it can’t tell you the latest weather where you are!
Provide Context: GenerativeWTiN will happily take a simple question, like “What’s the latest innovation in smart textiles?”. However, adding some context makes your prompt more sophisticated and shapes the response. Try, for example, “I am an experienced textile research and development scientist specialising in conductive materials. I am creating a report for some less experienced colleagues; please tell me some of the latest innovations in smart textiles and explain them simply for my audience.” The response to the latter will be vastly different because the model works within the parameters you established by adding that context.
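If it helps to think about this programmatically, the short sketch below shows the same idea in plain Python: a bare question versus the same question wrapped in role and audience context. It is purely illustrative; GenerativeWTiN is used through its chat window, and the build_prompt function and example strings here are hypothetical.

```python
# Illustrative sketch only: build_prompt and these example strings are hypothetical;
# in practice you type the context straight into the GenerativeWTiN chat window.

def build_prompt(question: str, role: str = "", audience: str = "") -> str:
    """Prepend optional context (who you are, who the answer is for) to a question."""
    parts = []
    if role:
        parts.append(f"I am {role}.")
    if audience:
        parts.append(f"I am writing for {audience}; please explain it simply for them.")
    parts.append(question)
    return " ".join(parts)

# A bare question:
print(build_prompt("What's the latest innovation in smart textiles?"))

# The same question, with context that shapes the response:
print(build_prompt(
    "Please tell me some of the latest innovations in smart textiles.",
    role="an experienced textile R&D scientist specialising in conductive materials",
    audience="less experienced colleagues",
))
```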
Be Specific: Boost the specificity of your prompt, for instance by adding a year or a particular region. The quality of an AI model’s output depends heavily on the clarity and precision of the query it receives. Rather than posing a general question like “Tell me about textile innovations”, spell out the aspect you’re interested in, for example, “Can you give me an example of how biomimicry is being used in textiles?” By doing so, you direct the model’s focus and obtain a more targeted, relevant response. The more granular your input, the more useful the output tends to be.
Build on the Conversation: GenerativeWTiN takes the form of a chat window, and chat-based systems like this remember what happened earlier in your conversation, so you don’t need to re-establish context with every message. Taking the “Can you give me an example of how biomimicry is being used in textiles?” example, you could follow up with a further prompt such as “Please add some more examples and format the response as a list” to refine the output.
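For readers who like to see the mechanism, the minimal sketch below models why a chat system can “remember” earlier turns: each new message is appended to a running history, and the assistant reads that whole history. This is an illustration under assumptions, not how GenerativeWTiN is implemented; the ask function and its placeholder reply are hypothetical.

```python
# Minimal model of conversation memory. This is not GenerativeWTiN's implementation;
# `ask` and the placeholder reply are hypothetical, for illustration only.

conversation: list[dict[str, str]] = []

def ask(message: str) -> str:
    """Append the new message to the running history; the assistant would then
    read the whole history, so follow-ups are interpreted in context."""
    conversation.append({"role": "user", "content": message})
    # Placeholder for the real model call, which would receive `conversation`.
    reply = f"[assistant reply, seen in the context of {len(conversation)} message(s)]"
    conversation.append({"role": "assistant", "content": reply})
    return reply

print(ask("Can you give me an example of how biomimicry is being used in textiles?"))
# The first exchange is still in the history, so this follow-up needs no
# re-explanation of the topic:
print(ask("Please add some more examples and format the response as a list"))
```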