Bloom's Taxonomy Description
Prompt
Given the course description, apply Bloom's Taxonomy to rewrite it into a set of precise levels.
Leverage Bloom’s Taxonomy to structure the course, detailing the knowledge, skills, and applications students will develop at each cognitive level (remember, understand, apply, analyze, evaluate, and create).
In the description, highlight how generative AI will serve as a primary teaching tool in the following ways:
Metaphors and Analogies: Explain how generative AI will dynamically create metaphors that help students understand complex circuits and system concepts intuitively, for example, comparing [XXX] to [YYY] in different contexts.
Stories and Lessons: Illustrate how AI-generated narratives will guide students through historical milestones in circuit design, with characters like pioneering engineers or scientists to add context and enhance engagement.
Content Generation: Describe how AI will develop interactive content—such as real-time quizzes, customized practice problems, and detailed walkthroughs of circuit diagrams—that align with students' progress and areas of improvement.
Simulations and Animations: Outline how AI-powered simulations will allow students to interact with virtual circuit models, adjusting parameters and observing outcomes in real-time, and how animations will visually represent phenomena like current flow and voltage changes.
Learning Activities and Projects: Explain how generative AI will generate scaffolded project suggestions based on Bloom’s higher-order objectives, like creating their own circuit designs or troubleshooting complex systems. AI should provide formative feedback throughout each step to foster critical thinking.
Assessment and Feedback: Describe how AI-driven assessments will go beyond traditional grading, providing personalized feedback loops that prompt self-reflection, track learning progress, and guide students through iterative problem-solving exercises.
The course description should aim to inspire curiosity and convey how AI enhances interactivity and personalizes the learning journey, supporting students in achieving a deep and practical understanding of circuits and systems.
Course Description
Organization: University of Minnesota Department of Electrical Engineering and Computer Design (EECD) Course Identifier: EE 4940 Course Title: Introduction to Deep Learning Offering Date: Spring 2025
Course Instructor
Sang-Hyun Oh, Professor of ECE, graduate faculty in neuroscience, physics, biomedical eng. Email: sang@umn.edu Office: 5-119 Keller Hall Teaching Assistant: Ike Goetzman (PhD candidate in ECE; email: goetz219@umn.edu) Lecture (Jan 22 – May 5, 2024): Monday/Wednesday 11:15am-12:30pm @ Appleby Hall 3
Course Summary
Machine learning, especially deep learning based on neural networks, has shown transformative impacts across life sciences, engineering, and medicine, as recently highlighted by the 2024 Nobel Prizes:
- The Physics Nobel was awarded to John Hopfield and Geoffrey Hinton for pioneering contributions to neural networks that enable machine learning.
- Demis Hassabis, John Jumper, and David Baker received the Chemistry Nobel for advancing computational protein design.
Course Overview
EE 4940: Introduction to Deep Learning aims to provide students with both fundamental and applied perspectives on these advances. Beginning with an overview of AI, the course introduces deep neural networks (DNNs) through practical, hands-on examples and guides students through specialized neural network architectures, including
- convolutional neural networks (CNNs) for diagnostic applications and
- recurrent neural networks (RNNs) for time series analysis.
Emphasis is placed on applying neural networks to tackle real-world problems in classification, regression, image analysis, and time-series prediction. Scientific applications are demonstrated as well, such as using DNNs to solve equations.
The course concludes with dimensionality reduction techniques, followed by advanced topics such as generative AI, and an introduction to reinforcement learning.
Emphasis will be on practical experience with Python-based coding over mathematical theory.
By the end of the course, students will have hands-on experience with Python-driven deep learning and a working knowledge of its potential across a wide range of problems.
Course Prerequisites
EE 4940 - Special Topics in Electrical and Computer Engineering is recommended but not required.
This course is designed to be accessible and introductory, with essential concepts and coding skills introduced as needed.
Expected Workload
This course (EE 4940) is a 3-credit, lecture-format course. The expected workload is, on average, 45 hours per credit over the semester, or about 3 hours per credit per week (including lecture), for a total workload of roughly 9 hours per week.
Course Goals and Objectives
EE 4940 aims to equip students with a foundation in deep learning, focusing on core architectures like deep neural networks (DNNs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs).
Through hands-on applications, students will learn to apply these models to solve practical problems using real-world datasets, such as cancer diagnostics and time-series analysis. The primary goal of the course is to develop students' ability to implement and troubleshoot neural network models, gain practical experience, and recognize potential pitfalls and artifacts when using DNNs.
By the course's end, students will have a working knowledge of AI’s role in scientific problem-solving, dimensionality reduction techniques, and a simple introduction to generative models and reinforcement learning (RL).
Topics To Be Covered
1. Introduction to AI and deep learning
- Brief history of AI, neural networks and recent Nobel-winning breakthroughs in physics and chemistry
- Course setup, tools, and brief introduction to Python for deep learning
- The universality theorem and the manifold hypothesis
2. Fundamentals of Deep Neural Networks (DNNs)
- Architecture of DNNs: layers, activation functions, and backpropagation
- Training, validation, and testing sets
- Loss functions, gradient descent, mini-batches
- Hands-on coding
3. Convolutional Neural Networks (CNNs) - applications in diagnostic imaging
- CNN architecture
- Applying CNNs, with MHIST as a case study for cancer diagnosis
4. Recurrent Neural Networks (RNNs) and time series analysis
- Introduction to RNNs and LSTMs for sequence data analysis
- Hands-on practice with real-world data.
5. Chips and AI
Briefly explore AI chips, neuromorphic devices, and future directions.
6. AI for scientific problem solving: using neural networks to approximate solutions to Maxwell's equations
7. Dimensionality reduction and autoencoders
8. Advanced topics
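To make the gradient-descent material in topic 2 concrete, here is a minimal sketch, invented purely for illustration (it is not course code): mini-batch gradient descent fitting a one-parameter linear model to synthetic data.

```python
import random

# Illustrative sketch (not from the course materials): mini-batch gradient
# descent fitting y = w * x, where the data are generated with true w = 3.
random.seed(0)
data = [(float(x), 3.0 * x) for x in range(1, 11)]

w = 0.0          # initial parameter guess
lr = 0.01        # learning rate
batch_size = 5   # examples per mini-batch

for epoch in range(200):
    random.shuffle(data)
    for i in range(0, len(data), batch_size):
        batch = data[i:i + batch_size]
        # d/dw of the mean squared error (1/2)(w*x - y)^2, averaged over the batch
        grad = sum((w * x - y) * x for x, y in batch) / len(batch)
        w -= lr * grad

print(round(w, 3))  # converges to the true slope, 3.0
```

The same shuffle-batch-update loop, with the gradient supplied by backpropagation instead of a hand-derived formula, is what a framework like PyTorch automates.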
Course Format
- Classroom lectures only
- Canvas LMS link will be provided
- Textbook and Required Materials: none
- Based on lecture notes (uploaded to Canvas 1-2 days before class)
Homework Assignments and “Mini-projects”:
- Regularly-scheduled weekly or biweekly homework assignments and 4 multi-week “mini-projects.” The former will be designed to exercise basic concepts and modest extensions of the technical or computational tools introduced in class; the latter will be crafted to allow students to take a “deeper dive” by applying techniques developed in the course to data sets arising in various aspects of ECE and life sciences.
- You are allowed (in fact, encouraged) to work with others when solving the homework and mini project problems. However, solutions submitted for grading should be formulated by each student independently! Late homework and mini-project submissions will be accepted, but without a legitimate (e.g., illness-based) justification, will result in a maximum grade of 80% for that assignment. Specifically, unjustified late assignments will be graded according to the same rubric as on-time assignments, but the corresponding score will be multiplied by 0.8 when entered into the gradebook.
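The late policy above can be summarized in a tiny sketch (the helper name is invented for illustration):

```python
def late_adjusted_score(raw_score, justified):
    """Apply the stated late policy: late work is graded on the normal
    rubric, then multiplied by 0.8 unless the lateness is justified."""
    return raw_score if justified else raw_score * 0.8

print(late_adjusted_score(95.0, justified=False))  # 76.0
print(late_adjusted_score(95.0, justified=True))   # 95.0
```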
Exams
There will be one midterm exam and one final exam. The midterm exam will be open book/notes. The final exam will be open book/notes and will be due no earlier than the final exam time specified by the Registrar. Either or both may be take-home exams.
Grading
The grade distribution for the course is provided below.
Note: distributions for homework assignments and mini-projects are total; individual submissions will be equally weighted.
Deliverable | Weight |
---|---|
Homework Assignments | 40% |
Mini-projects | 40% |
Midterm Exam | 10% |
Final Exam | 10% |
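As a quick illustration of how the weights in the table combine, here is a hypothetical helper (not part of the syllabus or gradebook software):

```python
# Weights from the grading table above.
WEIGHTS = {"homework": 0.40, "mini_projects": 0.40, "midterm": 0.10, "final": 0.10}

def course_grade(category_averages):
    # Individual submissions within a category are equally weighted, so the
    # category average is all that is needed here.
    return sum(WEIGHTS[k] * category_averages[k] for k in WEIGHTS)

grade = course_grade({"homework": 90, "mini_projects": 85, "midterm": 80, "final": 75})
print(round(grade, 1))  # 85.5
```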
Software Requirements
Through the homework assignments and mini-projects, students will be required to perform a number of computer exercises to become familiar with the computational and practical aspects of deep learning. This will typically entail utilization of Python, often in concert with existing toolboxes such as NumPy, SciPy, Matplotlib, Scikit-learn, and PyTorch (or TensorFlow), all of which are freely available for Windows, Mac, and Linux operating systems.
As a convenient (and perhaps preferred) alternative, students may wish to perform all (Python) programming computations using a free cloud-based account (e.g., Google Colab), which will be discussed and set up at the beginning of the semester. The large-scale deep learning computations needed for the final "mini-project" will use these resources.
Note
The Required University Policy Statements were provided in the original PDF of the course description. They have been removed to allow the generative AI tools to focus on the concept enumeration.
Result
ChatGPT o1 (thought about deep learning course structure for 42 seconds)
Below is a revised course description for EE 4940: Introduction to Deep Learning structured according to Bloom's Taxonomy. Each level highlights the knowledge, skills, and applications students will develop, as well as how generative AI will serve as a primary teaching tool---through metaphor/analogy generation, narrative stories, adaptive content creation, real-time simulations, scaffolded projects, and personalized assessment.
Course Information
- Organization: University of Minnesota Department of Electrical Engineering and Computer Design (EECD)
- Course Identifier: EE 4940
- Course Title: Introduction to Deep Learning
- Offering Date: Spring 2025
Instructor and Teaching Assistant
- Instructor: Sang-Hyun Oh, Professor of ECE (graduate faculty in neuroscience, physics, biomedical engineering) Email: sang@umn.edu Office: 5-119 Keller Hall
- Teaching Assistant: Ike Goetzman (PhD candidate in ECE) Email: goetz219@umn.edu
- Lecture Schedule (Jan 22 -- May 5, 2024): Monday/Wednesday 11:15am-12:30pm @ Appleby Hall 3
Course Summary
Machine learning---particularly deep learning---is transforming fields from life sciences to engineering and medicine. Inspired by recent Nobel Prize recognitions (e.g., John Hopfield and Geoffrey Hinton in Physics for neural network breakthroughs, Demis Hassabis, John Jumper, and David Baker in Chemistry for computational protein design), this course provides an introductory yet hands-on exploration of deep neural networks (DNNs), covering CNNs for image analysis, RNNs for time-series data, dimensionality reduction, an introduction to generative AI, and a glimpse of reinforcement learning.
Students will gain practical coding experience in Python with toolkits such as PyTorch or TensorFlow, applying deep learning techniques to real-world datasets (e.g., cancer diagnostics, time-series forecasting).
Prerequisites, Format, and Workload
- Prerequisites: EE 4940 - Special Topics in Electrical and Computer Engineering recommended but not required.
- Format: In-person lectures; no required textbook---lectures and notes via Canvas LMS.
- Workload: 3-credit lecture course (~9 hours/week total).
- Weekly/biweekly homework.
- 4 "mini-projects" involving deeper applications of course concepts.
- One midterm exam (open book/notes) and one final exam (open book/notes).
Course Goals and Objectives
By the end of the course, students will:
- Have a working knowledge of deep neural network architectures (DNNs, CNNs, RNNs).
- Understand training, validation, and testing processes, gradient descent, backpropagation.
- Apply models to diagnostic imaging, time-series forecasting, and other real-world problems.
- Explore dimensionality reduction and see how DNNs can solve scientific problems (e.g., approximating solutions to Maxwell's equations).
- Gain a basic introduction to generative AI and reinforcement learning.
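One of the first hands-on steps behind these objectives, partitioning a dataset into training, validation, and test sets, can be sketched as follows (the 70/15/15 proportions are an assumed convention, not mandated by the course):

```python
import random

# Sketch of a 70/15/15 train/validation/test split (assumed proportions).
random.seed(42)
data = list(range(100))   # stand-in for 100 labeled examples
random.shuffle(data)      # shuffle so the split is not ordered by collection

n_train = round(0.70 * len(data))
n_val = round(0.15 * len(data))

train = data[:n_train]
val = data[n_train:n_train + n_val]
test = data[n_train + n_val:]

print(len(train), len(val), len(test))  # 70 15 15
```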
Bloom's Taxonomy Levels and the Role of Generative AI
Below is how this course---and its use of generative AI---aligns with each cognitive level of Bloom's Taxonomy: Remember, Understand, Apply, Analyze, Evaluate, and Create.
1. Remember
Knowledge & Skills
- Recall fundamental definitions of artificial intelligence, neural networks, activation functions, backpropagation, and key historical milestones (e.g., Hopfield networks, Hinton's contributions).
- Identify standard Python libraries and deep learning frameworks (NumPy, SciPy, Matplotlib, PyTorch, TensorFlow).
Generative AI in Action
- Metaphors and Analogies: The AI tool will generate simple, memorable analogies (e.g., "A neural network layer is like a network of interconnected 'filters'---just like the lenses in glasses that each focus on a different part of the scene."). These real-time analogies help students quickly remember and anchor new terminology.
- Content Generation: AI-powered flashcards and micro-quizzes will be automatically created to help students memorize key terms, layer types, and function names.
2. Understand
Knowledge & Skills
- Interpret how DNNs learn internal representations of data.
- Summarize the logic behind training/validation/test splits and how loss functions guide model optimization.
- Explain the significance of convolutional filters in CNNs and gating mechanisms in RNNs.
Generative AI in Action
- Stories and Lessons: AI-generated narratives and role-play scenarios (e.g., "Geoffrey Hinton explains to a young Demis Hassabis how hidden layers revolutionized speech recognition") immerse students in historical contexts. These stories make key concepts in deep learning more relatable, fostering deeper understanding.
- Real-Time Clarifications: When students pose questions, AI can craft new explanations or analogies on the spot---ensuring that confusing topics are re-framed until fully understood.
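To ground the "convolutional filters" idea at this level, here is a minimal pure-Python sketch (invented example) of the 1-D convolution that underlies CNN feature maps:

```python
# Illustrative sketch: a 1-D convolution (valid mode), the core operation
# behind CNN feature maps. The kernel and signal are invented for illustration.
def conv1d(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# An edge-detecting kernel [-1, 1] responds where the signal jumps.
print(conv1d([0, 0, 1, 1, 0], [-1, 1]))  # [0, 1, 0, -1]
```

A CNN learns the kernel values rather than hand-picking them, and applies the same sliding-window idea in two dimensions over images.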
3. Apply
Knowledge & Skills
- Use Python and popular libraries to build, train, and test basic DNN architectures.
- Implement CNNs for image-based classification tasks and RNNs for time-series predictions.
- Employ data preprocessing techniques (normalization, reshaping) and iterative model refinement in mini-projects.
Generative AI in Action
- Content Generation: Interactive practice problems and short coding challenges adapt to each student's progress. If a learner struggles with CNN kernel concepts, AI will generate targeted exercises.
- Simulations and Animations: Students use AI-driven virtual labs to manipulate hyperparameters and watch real-time changes in learning curves, explore "zoomed-in" views of convolutional filters, or tweak RNN cell parameters to see how model predictions shift---bridging theory and practice.
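The data-preprocessing step listed under Apply can be sketched as follows (a generic z-score standardization; the function name and data are illustrative, not course code):

```python
# Sketch: z-score standardization (zero mean, unit variance), a common
# preprocessing step before training a neural network.
def standardize(values):
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return [(v - mean) / var ** 0.5 for v in values]

scaled = standardize([2.0, 4.0, 6.0, 8.0])
print([round(v, 3) for v in scaled])  # symmetric around 0, unit variance
```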
4. Analyze
Knowledge & Skills
- Compare different network architectures (DNN, CNN, RNN) for specific applications, examining performance metrics (accuracy, precision, recall, F1 scores).
- Investigate model failures or anomalies, diagnosing potential issues like overfitting or underfitting.
- Break down real-world datasets (e.g., MHIST for cancer diagnosis) to identify patterns and design suitable learning strategies.
Generative AI in Action
- Learning Activities and Projects: AI proposes incremental "mini-projects" that require critical data analysis---such as comparing performance across multiple model architectures or investigating feature maps in CNNs. The AI offers real-time hints, ensuring students learn systematic analysis methods.
- Adaptive Feedback: As students debug their models, AI-powered suggestions highlight where and why training might be failing, prompting reflection on data splits, loss function choices, or architectural mismatches.
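The performance metrics named above (accuracy, precision, recall, F1) can be computed directly from predictions; a minimal sketch with invented labels:

```python
# Illustrative sketch: binary-classification metrics from label lists.
def classification_metrics(y_true, y_pred):
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))   # true positives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = classification_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))  # 0.667 0.667 0.667 0.667
```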
5. Evaluate
Knowledge & Skills
- Assess the reliability, interpretability, and ethical considerations of deep learning solutions.
- Critique model architecture decisions, optimization techniques, and the trade-offs between computational cost and accuracy.
- Determine when alternative approaches (e.g., simpler regressions or decision trees) might be more suitable than deep networks.
Generative AI in Action
- Assessment and Feedback: AI-driven grading provides personalized feedback loops that go beyond numeric scores. Students receive suggestions (e.g., "Try re-evaluating your dropout strategy. Are you balancing variance and bias properly?"), enabling them to evaluate and refine their approach.
- Peer Review Simulations: Generative AI simulates a peer-review environment, providing constructive critiques of project proposals or final results---students practice giving and receiving evaluations in a supportive, simulated setting.
6. Create
Knowledge & Skills
- Design and implement original neural network architectures for novel data domains.
- Experiment with generative models (e.g., Variational Autoencoders or Generative Adversarial Networks) and propose new applications for them.
- Formulate and conduct reinforcement learning experiments for simple control or decision-making problems.
Generative AI in Action
- Learning Activities and Projects: The AI tool auto-generates scaffolded project suggestions that encourage students to develop new solutions---"Propose a CNN variant for biomedical image segmentation," or "Design an RNN-based text predictor with user-defined constraints."
- Formative Feedback: Throughout the creative process, the AI continually checks code, results, and design decisions, prompting deeper critical thinking. This fosters an iterative cycle of hypothesis-testing and refinement---helping students truly create and innovate.
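As one concrete shape such a "create"-level reinforcement learning experiment could take, here is a minimal tabular Q-learning sketch (entirely illustrative, not a course assignment): an agent in a five-cell corridor learns to walk right toward a goal.

```python
import random

# Minimal tabular Q-learning sketch (assumed example, not course material):
# states 0..4 in a corridor; reaching state 4 yields reward 1 and ends the episode.
random.seed(0)
N_STATES, ACTIONS = 5, [-1, +1]      # actions: move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1    # learning rate, discount, exploration rate

for episode in range(1000):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2 = min(max(s + a, 0), N_STATES - 1)        # walls clamp the position
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update toward the bootstrapped target.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned greedy policy should prefer moving right in every non-goal state.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

A tiny deterministic environment like this keeps the full Q-table visible, which makes the update rule easy to inspect before moving to larger control problems.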
How Generative AI Personalizes the Learning Journey
- Metaphors and Analogies: Ongoing generation of simple and relatable comparisons for abstract concepts, ensuring students never get stuck on jargon.
- Stories and Lessons: AI-developed narratives infuse historical milestones (Hopfield nets, Hinton's backpropagation breakthroughs, Hassabis's achievements) with character-driven storytelling.
- Content Generation: Adaptive quizzes, customized practice problems, and circuit-like diagram walk-throughs (adapted to deep learning diagrams) grow alongside the student's evolving competence.
- Simulations and Animations: AI-powered interfaces let students see real-time "inside" changes as a model learns---activations, gradients, confusion matrices---supporting interactive exploration.
- Learning Activities and Projects: Scaffolded tasks guide novices from basic classification examples to more advanced challenges---like building an autoencoder for dimensionality reduction or a basic reinforcement learning agent.
- Assessment and Feedback: Detailed, iterative feedback loops ensure students understand why a concept or code block fails, equipping them to self-reflect, iterate, and improve.
Grading Overview
Deliverable | Weight |
---|---|
Homework Assignments | 40% |
Mini-projects | 40% |
Midterm Exam | 10% |
Final Exam | 10% |
Note: Late submissions (without legitimate justification) receive up to 80% credit. Exams are open-book/notes and may include take-home components.
Conclusion
By integrating Bloom's Taxonomy with generative AI-powered tools, EE 4940: Introduction to Deep Learning aspires to spark curiosity and deep engagement. Students will remember core principles, understand architectural choices, apply techniques to real data, analyze model results, evaluate ethical and practical considerations, and ultimately create new solutions in the rapidly evolving field of deep learning. This comprehensive and interactive approach---fueled by dynamic AI metaphors, narratives, simulations, and personalized feedback---ensures that each learner can explore and master the foundational and forward-looking aspects of modern neural network technology.