Learn How To Master Test Writing with GitHub Copilot by Michelle Duke | Testμ 2024

Join Michelle Duke, and discover how GitHub Copilot can transform your testing process. This session will introduce you to GitHub Copilot’s capabilities in generating test code from natural language descriptions. Learn best practices for prompt engineering and see how Copilot can assist in writing and automating tests effectively.

Still not registered? Register Now!

Already registered? Share your questions in the thread below :point_down:

Hi there,

If you couldn’t catch the session live, don’t worry! You can watch the recording here:

Additionally, we’ve got you covered with a detailed session blog:

Here are some of the Q&As from this session:

When would you NOT suggest Copilot, to give us a better idea of where and when we can freely use it without much consequence?

Michelle Duke: Avoid using Copilot for highly sensitive or confidential code, or in scenarios where regulatory compliance and data security are paramount. Also, be cautious when working on complex algorithms or domain-specific logic where Copilot’s suggestions might not align with best practices or specific requirements.

Is there a command for descriptive method/variable/etc naming, for those of us anti-comment folks who prefer the code to describe itself where possible?

Michelle Duke: Copilot doesn’t have a specific command for generating descriptive names, but you can guide it by using clear, descriptive variable names and method names in your prompts. The more context you provide in your code, the better Copilot can suggest meaningful names.
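As a sketch of that idea (the function and test names below are hypothetical, chosen only to illustrate the pattern), descriptive identifiers act as an implicit specification that Copilot can pick up on:

```python
# A vague signature like "def check(x):" gives Copilot little to go on.
# Descriptive names act as an implicit specification instead:

def is_valid_email_address(address: str) -> bool:
    """Return True if the address has a user part, an '@', and a dotted domain."""
    user, sep, domain = address.partition("@")
    return bool(user) and bool(sep) and "." in domain

# With names like these, Copilot tends to suggest matching,
# self-describing test names and cases:
def test_accepts_well_formed_address():
    assert is_valid_email_address("dev@example.com")

def test_rejects_missing_domain():
    assert not is_valid_email_address("dev@")
```

The same principle applies in any language: the richer the names already in the file, the more the suggested tests read like documentation rather than generic boilerplate.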

Can you give examples of effective prompt engineering techniques for generating high-quality test cases?

What role does GitHub Copilot play in the future of test automation, and how should testers adapt to its evolving capabilities?

What are the limitations of GitHub Copilot in test writing, and how should testers handle them?

What’s the learning curve for building competent or even advanced test script-writing on GitHub Copilot?

Who is going to review the code generated by GitHub Copilot?

Is it feasible to utilize GitHub Copilot for the creation of intelligent locators to be utilized in UI test frameworks like Playwright?

Does Copilot write tests without having a README that explains the application?

How can developers balance using GitHub Copilot with maintaining good testing practices and code quality?

Could you give some examples of what context from my development workflow GitHub Copilot can access? For example, how could I use it to fix errors from a GitHub Actions job?

Does Copilot understand what you’re testing?

Will GitHub Copilot help test the API using Postman if it’s also integrated with VS Code?

What about a junior dev who doesn’t know what GitHub Copilot is generating? How will they determine the code quality?

Does Copilot support BDD and create feature files?

Can GitHub Copilot help train or sandbox users on how to get the hang of (or improve upon their) test writing?

Effective prompt engineering involves being specific and clear with your inputs. For example, if you’re using AI tools like GitHub Copilot to generate test cases, provide context-rich inputs like:

  • “Generate edge case tests for this login function.”
  • “Write unit tests for input validation in the checkout process.”

The more precise and contextual the prompt, the better the AI can generate relevant test cases.
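As a sketch of what the first prompt might yield (the `login` function and its rules here are entirely hypothetical, included only so the tests are runnable), Copilot typically proposes edge-case tests along these lines:

```python
# Hypothetical login function, used only to illustrate the kinds of
# edge-case tests a prompt like "Generate edge case tests for this
# login function" tends to produce.
def login(username: str, password: str) -> bool:
    if not username or not password:
        return False
    return username == "admin" and password == "s3cret"

# Edge cases commonly suggested for such a function:
def test_rejects_empty_username():
    assert not login("", "s3cret")

def test_rejects_empty_password():
    assert not login("admin", "")

def test_rejects_whitespace_only_credentials():
    assert not login("   ", "   ")

def test_is_case_sensitive():
    assert not login("Admin", "s3cret")

def test_accepts_valid_credentials():
    assert login("admin", "s3cret")
```

Suggestions like these still need review: Copilot will happily generate cases that pass against buggy logic, so treat the output as a starting checklist, not a verdict.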

GitHub Copilot is set to play a major role in assisting with code and test script generation. Testers should adapt by learning how to guide Copilot with the right prompts and thoroughly reviewing its output. Copilot can help in speeding up test creation, but it’s crucial to understand its limitations and maintain manual oversight to ensure test accuracy and completeness.