GitHub Copilot Best Practices¶
Prompt 1: Writing Effective Comments for GitHub Copilot¶
Use Case: To help developers write comments that maximize GitHub Copilot's code generation capabilities.
Write comments that:
1. Clearly state the function's purpose
2. Define expected inputs and outputs
3. Mention edge cases
4. Include relevant algorithm details
5. Use consistent formatting
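Example (a sketch; the parseDuration function and its rules are invented for illustration):
```typescript
/**
 * Parse a human-readable duration string such as "1h 30m" or "45s" into milliseconds.
 *
 * Input: a string made of hour (h), minute (m), and second (s) tokens separated by spaces.
 * Output: the total duration in milliseconds as a non-negative number.
 * Edge cases: an empty string returns 0; an unknown unit throws a RangeError.
 * Approach: tokenize with a regular expression, then sum each unit's contribution.
 */
function parseDuration(input: string): number {
  // Copilot will typically suggest the implementation from the comment above
  return 0; // placeholder so the stub type-checks
}
```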
Prompt 2: Structuring Code for Better GitHub Copilot Suggestions¶
Use Case: To help developers structure their code in ways that get better suggestions from GitHub Copilot.
Tips for better GitHub Copilot suggestions:
1. Start with clear, descriptive variable and function names that follow your language's naming conventions
2. Add detailed JSDoc/docstring comments before functions
3. Break complex logic into smaller, well-named functions
4. Provide a few example implementations first, then let Copilot learn the pattern
5. Include type information (TypeScript, JSDoc, Python type hints)
6. Keep code patterns consistent across your codebase
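Example (a sketch with invented validation rules, showing descriptive names, type information, documentation comments, and one finished function that establishes a pattern Copilot can follow):
```typescript
interface SignupForm {
  username: string;
  email: string;
}

/** Returns true when the username is 3-20 letters, digits, or underscores. */
function isValidUsername(username: string): boolean {
  return /^[A-Za-z0-9_]{3,20}$/.test(username);
}

/** Returns true when the email has a basic user@domain.tld shape. */
function isValidEmail(email: string): boolean {
  // With the pattern above in place, Copilot tends to suggest a similar one-liner here
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}

/** Returns a list of human-readable problems, or an empty array when the form is valid. */
function validateSignupForm(form: SignupForm): string[] {
  const problems: string[] = [];
  if (!isValidUsername(form.username)) problems.push("Invalid username");
  if (!isValidEmail(form.email)) problems.push("Invalid email address");
  return problems;
}
```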
Prompt 3: Getting Specific Implementations from GitHub Copilot¶
Use Case: To help developers get specifically tailored implementations from GitHub Copilot.
To get specific implementations:
1. Be explicit about design patterns and approaches
2. Specify language features to use or avoid
3. Mention performance considerations
4. Include expected input/output examples
5. Reference specific libraries or frameworks
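Example (a sketch; the LRU cache requirement is hypothetical):
```typescript
/**
 * Implement an in-memory cache with a least-recently-used (LRU) eviction policy.
 * - Use a built-in Map to track recency; avoid external dependencies.
 * - get and set should run in O(1) time.
 * - Example: with capacity 2, after set("a", 1), set("b", 2), get("a"), set("c", 3),
 *   the key "b" has been evicted.
 */
class LRUCache<K, V> {
  // Copilot will suggest the fields and methods from the comment above
}
```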
Prompt 4: Prompting GitHub Copilot for Code Improvements¶
Use Case: To help developers use GitHub Copilot to improve existing code.
Ask Copilot to improve your code by:
1. Specifying what aspects to improve (performance, readability, etc.)
2. Mentioning specific techniques to apply
3. Requesting before/after explanations
4. Including constraints to maintain
5. Explaining the expected outcomes of the improvements
Example:
```javascript
/**
* TODO: Refactor this function with GitHub Copilot to:
* - Improve performance (current O(n²) complexity)
* - Add proper error handling
* - Use modern JavaScript features
* - Implement pagination support
* - Maintain the same function signature
*
* @param {Array} items - The items to filter and process
* @param {Object} options - Processing options
* @returns {Array} Processed results
*/
```
Prompt 5: Working with GitHub Copilot for Test Generation¶
Use Case: To help developers effectively use GitHub Copilot to generate comprehensive tests.
Tips for getting better test suggestions:
1. Clearly document the function to be tested first
2. Specify testing framework and style
3. Start with a few example test cases
4. Include edge cases in your comments
5. Request specific test coverage targets
Example:
```javascript
/**
* Write unit tests for the validateRegistrationInput function using Jest
*
* Test cases should include:
* 1. Valid input validation
* 2. Invalid username formats (too short, too long, invalid chars)
* 3. Invalid email formats
* 4. Password requirement failures
* 5. Password mismatch
* 6. Missing required fields
* 7. Incorrect input types
*
* Each test should include an assertion and appropriate error message
*/
```
Prompt 6: Code Generation Patterns¶
Use Case: To help developers efficiently generate code using GitHub Copilot.
Tips for generating specific types of code:
1. Data Structure Generation
```typescript
/**
* Define a data structure for:
* Type: User Profile
* Requirements:
* - Unique identifier
* - Personal information
* - Contact details
* - Account settings
* Include: Type validation, default values, optional fields
*/
```
2. Algorithm Implementation
```typescript
/**
 * Implement algorithm for:
 * Task: Sort a list of objects by multiple criteria
 * Input: Array of product objects with: name, price, rating
 * Output: Sorted array by primary and secondary keys
 * Consider: Performance, memory usage, readability
 */
function sort_products(products: Product[], sort_keys: SortKey[]): Product[] {
  // Copilot will suggest the implementation
}
```
3. API Integration
```typescript
/**
 * Create API integration for:
 * Service: External payment processor
 * Operations needed:
 * - Process payment
 * - Refund transaction
 * - Check status
 * Include: Error handling, retry logic, timeout handling
 */
class PaymentService {
  // Copilot will suggest the implementation
}
```
4. Unit Test Generation
```typescript
/**
 * Generate unit tests for:
 * Function: validateUserInput(data: UserInput): ValidationResult
 * Test cases needed:
 * - Valid input data
 * - Missing required fields
 * - Invalid field formats
 * - Edge cases
 */
describe('validateUserInput', () => {
  // Copilot will suggest test cases
});
```
Best Practices for Code Generation¶
- Clear Intent Communication
  Specify in comments:
  - Purpose of the code
  - Input/output formats
  - Important constraints
  - Performance requirements
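For instance, an intent-focused comment might read as follows (the deduplication task is invented):
```typescript
/**
 * Purpose: Remove duplicate customer records, keeping the first occurrence of each email.
 * Input: Array of { email: string; name: string } objects, possibly unsorted.
 * Output: A new array; the input must not be mutated.
 * Constraints: Treat email addresses case-insensitively.
 * Performance: Should handle ~100k records in a single pass (for example, using a Set).
 */
```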
- Incremental Generation
```typescript
// 1. Start with interface/type definitions
interface DataProcessor {
  // Core functionality
}

// 2. Add basic implementation
class BasicDataProcessor implements DataProcessor {
  // Basic methods
}

// 3. Enhance with additional features
class AdvancedDataProcessor extends BasicDataProcessor {
  // Advanced features
}
```
- Pattern Guidance
```typescript
/**
 * Guide the generation by specifying:
 * 1. Design pattern to use
 * 2. Error handling approach
 * 3. Logging requirements
 * 4. Performance constraints
 */
class FactoryExample {
  // Factory pattern implementation...
}
```
Common Generation Scenarios¶
- CRUD Operations
```typescript
/**
 * Generate CRUD operations for:
 * Entity: Product
 * Storage: SQL Database
 * Features:
 * - Validation
 * - Error handling
 * - Logging
 * - Transactions
 */
class ProductRepository {
  // Copilot will suggest implementation
}
```
- Middleware Functions
```typescript
/**
 * Create middleware for:
 * Purpose: Authentication and Authorization
 * Requirements:
 * - Token validation
 * - Role checking
 * - Rate limiting
 * - Error handling
 */
function auth_middleware(config: AuthConfig) {
  // Copilot will suggest implementation
}
```
- Utility Functions
```typescript
/**
 * Generate utility functions for:
 * Category: Date manipulation
 * Functions needed:
 * - Format dates
 * - Calculate durations
 * - Handle timezones
 * - Parse date strings
 */
class DateUtils {
  // Copilot will suggest implementation
}
```
Generation Anti-patterns¶
- Code Structure
  - Avoid overly complex single prompts
  - Don't mix multiple concerns
  - Skip unnecessary details
  - Keep context focused
- Implementation Details
  - Don't over-specify implementation
  - Allow flexibility in approach
  - Focus on requirements
  - Guide but don't constrain
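As a rough illustration, a single sprawling prompt can be split into smaller, focused ones (the user-service features below are invented):
```typescript
// Anti-pattern: one prompt that mixes many unrelated concerns
/**
 * Build a user service with registration, login, password reset, email sending,
 * audit logging, rate limiting, admin reporting, and CSV export.
 */

// Better: a focused prompt for a single concern, with just the context it needs
/**
 * Implement user registration:
 * - Validate username and email
 * - Hash the password before storing it
 * - Return the new user's id
 */
class RegistrationService {
  // Copilot will suggest the implementation
}
```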
Feedback Loop¶
- Review and Refine
  After generation:
  1. Review for correctness
  2. Check edge cases
  3. Verify error handling
  4. Test performance
- Iterative Improvement
  For better results:
  1. Adjust prompts based on output
  2. Add missing context
  3. Clarify requirements
  4. Provide examples
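A small sketch of this loop: an initial prompt that produced a vague suggestion is refined with the missing context, requirements, and example (the formatPrice function is hypothetical):
```typescript
// First attempt: too little context, so the suggestion had to guess at the requirements
/**
 * Format a price for display.
 */

// Refined prompt: adds the missing context, a concrete example, and a constraint
/**
 * Format a price for display.
 * - Input: an amount in cents (integer) and an ISO 4217 currency code, e.g. (1999, "USD")
 * - Output: a localized string such as "$19.99"
 * - Use Intl.NumberFormat; never show more than two decimal places
 */
function formatPrice(cents: number, currency: string): string {
  // A suggestion along these lines is what the refined prompt tends to produce
  return new Intl.NumberFormat("en-US", { style: "currency", currency }).format(cents / 100);
}
```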