Leveraging and Managing AI in Education Today and into the Future

A conversation with Forough Ghahramani and Florence Hudson - originally published on the Springer Nature Research Communities website on September 24, 2025.

What drew you each to such varied topics of work/study and how did you find yourself where you are today?

Forough Ghahramani: I’ve always followed my curiosity, and that’s taken me through a varied journey across biology, math, computer science, software engineering, academia, entrepreneurship, and now into AI and quantum. I started out passionate about science, biology and math especially, and added computer science during my graduate program, which opened the door to early work in high-performance computing. I still remember using punch cards on the IBM S/360, then watching computing evolve rapidly: from minicomputers during graduate school to 64-bit systems, open-source computing environments, the internet, and search engines. Early in my career I worked on operating system development for the proprietary Virtual Memory System (VMS) at Digital Equipment Corporation (DEC), then moved on to Unix engineering, performance management and benchmarking, and migrating applications from 32-bit to 64-bit systems.

One of the most rewarding phases of my career was as a technical consultant, where I got to see how the systems I helped build were applied across industries including pharmaceuticals, biotech, healthcare, finance, and manufacturing, even steel manufacturing. Working as a systems architect on the Human Genome Project brought everything full circle. It tied my backgrounds in biology, mathematics, and computing into one meaningful direction. I became fascinated by the field of bioinformatics, and I fully reinvented myself in that area. I went from being a traditional software engineer to running my own biotechnology consulting company, which exposed me to the world of entrepreneurship and its unique challenges and rewards.

Over time, I came to view technology as much more than a tool: it became a vehicle for discovery, innovation, and transformation. That entrepreneurial spirit eventually led me to academia, where I found joy in teaching, mentoring, and launching new programs that bridge industry and education. I’ve always been driven by a love of learning and innovation, and even when career shifts weren’t intentional, sometimes guided by industry shifts, life phases, or family needs, they added depth and diversity to my skill set and opened my interest to new areas.

I went back to school a couple of times: once earlier in my career to get my MBA, and then later in life to earn my doctorate, something that brought together my leadership work in higher education and my lifelong commitment to continuous growth. I first learned about AI in the 1980s and have worked with big data and HPC for over 30 years, but what fascinates me today is seeing AI enter the mainstream and imagining its future alongside quantum technologies. My experience in both industry and higher education, always on the leading edge, has allowed me to live four very different careers, and I’m still energized by what lies ahead.

In my current role as Vice President for Research & Innovation at NJ Edge, I work with higher education leaders as they develop strategies to support research through advanced and emerging technologies, including AI, high-performance computing, and quantum.

While much has changed in my career journey, what has stayed constant includes a problem-solving mindset, a hunger to grow, and a strong sense of what matters to me at any given time. Education has played a central role in shaping opportunities throughout my life, and I’m a firm believer in giving back. As an engineer and advocate, I’ve worked to encourage young people, especially girls, to pursue STEM fields, often speaking to students from K–12 through college to help spark interest and confidence in science and technology. Another aspect of giving back is serving on advisory boards at two of my alma maters, the Penn State University College of Science and the University of Pennsylvania. Involvement in professional organizations such as IEEE and the Society of Women Engineers has also provided opportunities for community engagement.

Florence Hudson: I always loved math and science from a young age. When I was a young girl my brother would wake me up to watch NASA spaceflight missions on TV which I thought were so cool. One day I asked “how do they do that?” That’s when I began thinking like an engineer.

As an engineer and a scientist, I have insatiable curiosity, plus I love to create things and fix things, whether for business, technology, research, government, policy, humans, or society. Basically, I follow my curiosity, identify challenges to address, and apply my thinking and all types of technology to help solve problems. It’s a never-ending opportunity, as the problems change and so do the technologies and solutions available to address them. I believe our opportunity while we are on this earth is to identify the unique gifts we each have and use them for good every day. That is what I strive to do across all domains that interest me, from data science to all types of engineering, sciences, knowledge networking, cybersecurity, standards, societal challenges, education, outreach and more.

As my educational and professional careers unfolded, I worked for NASA and the Grumman Aerospace Corporation while earning my aerospace engineering degree. I loved aerospace engineering, but the lead time from research to launch was decades and funding was declining. Computing and information technology were growing, so I expected that computers would run the world someday, and I went into computing. My first job in computing was at Hewlett Packard. Then I enjoyed a long career at IBM, where I was able to apply technology to all sorts of societal, business and technical challenges, from an initial sales role to eventually becoming an IBM Vice President of Strategy and Marketing and Chief Technology Officer.

After retiring from IBM in 2015, I became a Senior Vice President and Chief Innovation Officer at Internet2 in the research and education world, and then worked for the NSF Cybersecurity Center of Excellence at Indiana University. In 2020 Columbia University asked me to lead the Northeast Big Data Innovation Hub after I had been on the advisory board since 2015 working on their overall strategy and cybersecurity initiatives, so it was a natural fit to become Executive Director. I had also started my own consulting firm (FDHint, LLC) as CIOs were asking me to consult with them. I have also served on over 18 corporate, advisory and steering boards - from NASDAQ-listed companies to academic and non-profit entities.

Cybersecurity is a key focus of mine. My passion for it started in my early days as an aerospace engineer working on defense projects. At IBM I worked on security initiatives in servers and solutions, and continued the focus working for the NSF Cybersecurity Center of Excellence at Indiana University. This led to my leading the development of the IEEE TIPPSS standard to improve Trust, Identity, Privacy, Protection, Safety and Security for clinical IoT (Internet of Things), which won the IEEE Emerging Technology Award in 2024. Springer has published two of my books on TIPPSS. I am currently Vice Chair of the IEEE Engineering in Medicine and Biology Society Standards Committee, and lead a TIPPSS roadmap task group which has spawned a new IEEE standard working group on AI-based coaching for healthcare with TIPPSS. TIPPSS is being applied in other domain areas as well, including large experimental physics control systems, energy grids, and distributed energy resources. TIPPSS is envisioned to apply to all cyber-physical systems.

In what ways do you think your own educational/academic/career path might have been different if you started in today’s climate?

Forough Ghahramani: If I were starting my academic and professional journey today, I think it would have looked quite different, maybe not in direction, but in pace, access, and mindset. When I was starting out, computing was a specialized, niche field that required physical access to machines, time on shared systems, and a lot of patience. Today, a high school student can access cloud-based computing resources, learn to code from YouTube, and contribute to open-source projects from their bedroom. That kind of accessibility changes everything. With AI, cloud computing, and real-time collaboration platforms now core to both education and work, the barriers to accessing knowledge and innovating early have dramatically lowered.

With today’s startup opportunities, accelerators, and online communities, I probably would have embraced entrepreneurship sooner. I also imagine I would have engaged with more interdisciplinary learning earlier on, because today’s educational environment really encourages learning across domains. AI, data science, and quantum computing would have pulled me in even faster given my background and propensity, but I would have had to be more intentional about focus, given that today’s information overload can be overwhelming.

I think my motivation and values would be the same. I’ve always been driven by curiosity and the desire to connect ideas across fields. What has changed is that today’s climate rewards that type of thinking more openly, and it provides more tools to act on it faster.

Florence Hudson: I think if I were to start my educational and professional career today I might have stayed in aerospace engineering longer, as there are many more job opportunities with government and commercial space organizations, and a faster transition from research to practice. When I was an aerospace and mechanical engineering student at Princeton University, working on future missions around Jupiter during a NASA summer internship, I was told my internship project would take 18 to 20 years to come to fruition. That’s a long time! That’s when I decided to go into Information Technology (IT). Now there is a much faster path from research to execution in aerospace, and many more jobs, thereby broadening and accelerating opportunity and impact.

Being involved in both technology and education, do you see more risk with the technology itself (misinformation, bugs, security) or how it is applied in the educational landscape (with complicated policies, uneven funding, inequalities)? Or a combination?

There is risk in both AI technology itself as well as how it is used in the educational landscape.

To think more broadly, we must consider that the educational landscape of AI includes everyday use and education for all citizens, not just educational institutions. Openly available AI-enabled systems, from Large Language Models (LLMs) like ChatGPT to everyday devices that use AI to make suggestions, can provide incorrect information, and they are affecting the education of our citizens, students, teachers, and professionals. If an educator, professional, or parent is given incorrect information and teaches others or acts on it, AI’s incorrect recommendations can have a broad negative impact. We must aspire to limit that negative impact.

There is also a risk of users sharing information with AI tools that is meant to be kept private, whether those users are private citizens or professionals in industry or government. AI tools may fold the information in users’ questions into the corpus of content used to answer questions for other users, putting the privacy and security of shared information at risk. This risk applies to all humans and institutions asking questions of AI tools, as their questions provide context and content that the AI tool can use more broadly.

In educational systems and institutions, AI risks providing incorrect information, so students and teachers may learn things incorrectly, and those errors proliferate to others they speak with or teach. AI creates a false sense of comfort that it knows the right answer, without people questioning or vetting it. It makes it easy for people to stop thinking. Many people want to let the AI think “for” them, and many do not bother to check whether it is right or wrong. This is a real danger.

Technology, by itself, can be flawed, but the risks can be managed with good design, robust testing, responsible development and ongoing management. An important concern is when powerful technologies are layered on outdated systems, infrastructures, or unclear policies.

While we must continue to improve the technology itself, we also need to focus on the human, structural, and policy dimensions that determine whether technology helps or harms. If AI is deployed without thoughtful design, policy, and educator involvement, it can do more harm than good. The challenge isn’t just what AI can do, it is also who gets to use it and for what purpose.

Like any technology, there will be bugs and problems, but it’s when we abuse the power of AI that the risk to humans and institutions increases.

What, briefly, is the big picture landscape of AI and education, including key strengths and risks?

AI is being used in education already, by students, teachers, and administrators. Like any tool, it can be used for good or for bad.

AI is transforming education at every level, from K–12 classrooms to higher education and workforce training, by introducing new possibilities for personalization, real-time support, availability, and scalability across the broad ecosystem of educational systems and institutions. Key strengths include AI’s ability to deliver adaptive learning experiences tailored to each student’s pace and style, automate time-consuming tasks like grading or feedback, and reveal data-driven insights that help educators intervene earlier and more effectively in student learning journeys. AI can provide a quick synopsis so students and teachers can rapidly ingest content, translate content across languages, generate visualizations to support complex thinking, and serve as a tutor, coach, or creative collaborator. It can enable teachers and administrators to analyze student and school data and metadata to identify patterns, anomalies, and opportunities to make better decisions and improve processes to better enable student success.

But alongside these strengths are real risks. There are real concerns about authorship, academic integrity, privacy, and surveillance, especially when student data is collected without transparency or used to make high-impact decisions. The ease of generating text or code with AI raises philosophical and practical questions about what it means to learn, think critically, or create original work in an AI-augmented world. There's also the risk of over-reliance: students and educators may become dependent on AI to the point that foundational skills erode or motivation diminishes. It also may enable students, teachers and administrators to disconnect from the content and make less informed or human-centered decisions.

Striking the right balance means centering human agency and pedagogy in the design and deployment of AI tools. AI should serve as a support mechanism, not a substitute, for the relational, reflective, and exploratory aspects of education. This requires thoughtful policies, transparent use guidelines, educator training, and practical design that anticipates and avoids unintended consequences.

In what ways can AI be used to enhance/encourage learning rather than give students a way around it?

AI can be a powerful cognitive companion when integrated thoughtfully into the learning process. Rather than serving as a shortcut to answers, it can enhance learning by helping students form better questions, explore multiple perspectives, visualize abstract or complex ideas, and engage in iterative practice with immediate, personalized feedback. For example, intelligent tutoring systems can walk students through problem-solving steps, while AI writing tools can offer style and grammar feedback that encourages revision rather than doing the writing for them.

The real shift lies in moving away from a transactional learning mindset, where students are focused on getting the answer as efficiently or quickly as possible, toward a collaborative learning mindset, where AI acts as a coach, partner, or creative assistant in the learning process. In this context, students are not passive recipients of knowledge but active participants in the construction of their understanding. AI tools can model Socratic questioning, recommend readings based on prior gaps, or simulate real-world scenarios for application of skills.
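To make the coach idea concrete, here is a minimal sketch of a feedback-only writing assistant built on the OpenAI chat API. It assumes the openai Python SDK and an API key in the environment; the model name and prompt wording are illustrative, not a reference to any specific classroom tool. The key design choice lives in the system prompt: the model is instructed to question and point, never to rewrite.

```python
# pip install openai  -- assumes OPENAI_API_KEY is set in the environment
from openai import OpenAI

client = OpenAI()

# The coaching behavior lives entirely in the system prompt: the model
# asks questions and points at passages instead of producing a rewrite.
SYSTEM_PROMPT = (
    "You are a writing coach. Never rewrite or complete the student's text. "
    "Ask up to three Socratic questions about the argument, evidence, and "
    "structure, and identify one passage the student should revisit."
)

def coach_feedback(draft: str) -> str:
    """Return questions and pointers about a draft, not a rewritten draft."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat-capable model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": draft},
        ],
        temperature=0.3,
    )
    return response.choices[0].message.content

print(coach_feedback("Climate change is bad and governments should fix it."))
```

The same constraint-in-the-prompt pattern generalizes to math hints, code review, and reading questions: the tool is shaped to prompt revision rather than replace it.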

When used this way, AI doesn’t replace learning, it scaffolds it. It gives learners room to explore, fail safely, reflect, and try again. That’s not just about keeping students honest, it’s about keeping them engaged, curious, and confident in their capacity to learn and grow.

How has AI impacted students’ attitudes towards education (from K–12 to higher ed)? Do they feel it’s less relevant? Or are they excited because it’s a tool they can harness?

Students’ reactions to AI in education are mixed. Some see it as a big benefit in reducing their effort, thereby diminishing the perceived value of their own effort (“Why write when AI can do it?”), while others see it as a superpower that enhances their creativity and efficiency. There is some skepticism around the use of AI by educators in the classroom. Much depends on how schools and educators frame AI, not as a crutch, but as a catalyst for inquiry, reflection, and application.

What areas outside of paper writing are changing and in what ways? 

AI is reshaping how students approach nearly every part of academic life. Lecture transcription and summarization tools (e.g., Otter.ai), AI-powered flashcard generators, and group collaboration platforms with embedded AI assistants are streamlining notetaking, study sessions, and project work. The learning ecosystem is becoming more modular, on-demand, and scaffolded by intelligent systems.

AI is changing how educators approach their roles as well. Some educators are requiring in-person test-taking for students with hand-written answers in the classroom, to avoid the use of AI and ensure they know what the students are actually learning and understanding. The use of AI can limit critical thinking, which is a risk to society, academia and science. Managing the use of AI to ensure real learning may be an ongoing challenge into the future.

How has AI changed how students attend lectures and take notes, study, do group work, etc.? 

AI is rapidly reshaping how students engage with learning, from the way they attend lectures to how they study and collaborate. Tools like Otter.ai and Notion AI allow students to focus on understanding rather than taking frantic notes, offering real-time transcription, summarization, and translation to support diverse learners. AI-enhanced note-taking apps can organize content, generate highlights, and even answer follow-up questions, turning notes into interactive study companions. When it comes to studying, platforms like Khanmigo and Quizlet deliver personalized learning experiences by creating adaptive quizzes, tutoring simulations, and targeted study plans based on students' evolving needs.
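The transcription side of this is easy to see with the open-source Whisper model, which offers the same core capability as commercial tools like Otter.ai. A minimal sketch, assuming the openai-whisper package, ffmpeg, and an illustrative lecture recording:

```python
# pip install openai-whisper  (requires ffmpeg on the system path)
import whisper

# Load a small speech-to-text model; sizes range from "tiny" to "large",
# trading speed for accuracy.
model = whisper.load_model("base")

# Transcribe a recorded lecture; the filename is a placeholder.
result = model.transcribe("lecture.mp3")

print(result["text"][:500])  # the running transcript

# Timestamped segments support jumping back to a point in the lecture
# or generating highlight-style study notes.
for seg in result["segments"][:5]:
    print(f'{seg["start"]:7.1f}s  {seg["text"].strip()}')
```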

Group work has also become more efficient with the help of AI-powered tools that support brainstorming, project planning, and communication, especially in remote or multilingual settings. Perhaps the most significant shift is in mindset: with AI handling many of the routine academic tasks, students are free to focus on deeper learning, critical thinking, and strategic problem-solving. Ensuring consistent and broad availability of AI tools, training, and infrastructure is essential to enable these advancements to enhance learning for all students.

How can AI be implemented without widening the digital divide between well-resourced and under-resourced schools?

To implement AI in education without widening the digital divide, we need to treat broad availability as a design requirement, not an afterthought. AI tools need to be available to the broad community of schools and learners, whether they have ample or limited resources in terms of high-speed internet, infrastructure, and trained staff, or we risk creating uneven opportunities for growth across the broad student population. Suggested actions are outlined below.

  • Prioritize broad availability and low-bandwidth tools - Develop and adopt AI tools that work offline or with minimal internet connectivity. Many students still rely on shared devices or limited data plans, so tools must be optimized for use on mobile devices, with offline functionality, and in resource-constrained environments. Open-source platforms and lightweight AI models can play a critical role here (see the sketch after this list).
  • Invest in educator training across all settings - Professional development opportunities must be extended to educators across the broad landscape of schools, both well-resourced as well as under-resourced schools, so they can all have the opportunity to understand, evaluate, and effectively use AI. It’s not just about the tools, it’s about empowering educators to integrate them meaningfully and thoughtfully into their classrooms.
  • Embed broad AI enablement in policy and funding - Policymakers and funders could tie grants and procurement to technology goals across a wide array of schools and communities to incentivize AI use and adoption. For example, federal and state programs could subsidize AI deployments or provide incentives for companies to co-design tools in a range of communities.
  • Promote Public-Private-Partnerships (PPPs) in and across communities - AI adoption should be accompanied by partnerships that bring together schools, community organizations, libraries, universities, and industry. These partnerships can support infrastructure upgrades, shared use of cloud resources, or mentorship programs that extend beyond the school walls.
  • Focus on student-centered AI - Instead of deploying AI only for administrative efficiency (e.g., grading automation or test prep), educational institutions and funders could invest in tools that support learner growth, curiosity, and agency, tools that work just as well for a student in a rural district as one in a top-performing urban school.
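On the first point, "lightweight" is not hypothetical: small open models can already run on a modest school laptop with no internet connection once downloaded. A minimal sketch, assuming the Hugging Face transformers library; the model choice is illustrative:

```python
# pip install transformers torch
from transformers import pipeline

# A small open instruction-following model (a few hundred MB) that runs
# on CPU. After a one-time download (or delivery on physical media), it
# needs no internet connection.
generator = pipeline("text2text-generation", model="google/flan-t5-small")

prompt = "Explain photosynthesis to a 10-year-old in two sentences."
print(generator(prompt, max_new_tokens=80)[0]["generated_text"])
```

Quality is well below the large hosted models, but for drill-and-practice, summarization, and translation tasks, small local models can meaningfully narrow the access gap.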

In summary, if we approach AI as a tool for both effectiveness and efficiency, and ensure community voices are part of the process from the beginning, it can help close, not widen, the digital divide.

This Nature article discusses using a document’s version history to deter AI usage in writing. Have you heard of other techniques or ideas involving technology? 

The method proposed in the Nature article involves reviewing incremental edits over time, which can reveal whether a document was developed iteratively or pasted in as a fully polished piece, a potential flag for large language model use. Version history is just one part of a growing set of tools; other techniques include technological, pedagogical, and procedural approaches.
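To make the version-history idea concrete, here is a toy heuristic over revision snapshots. The data format is hypothetical (exported from a document's revision log), and a large single jump is only a flag for human review, never proof of AI use:

```python
# Hypothetical revision snapshots: (timestamp, total character count)
# pairs exported from a document's version history.
revisions = [
    ("2025-03-01 19:02", 0),
    ("2025-03-01 19:40", 412),
    ("2025-03-02 20:15", 1480),
    ("2025-03-04 18:30", 2950),
]

def looks_pasted(revs, burst_fraction=0.8):
    """Flag documents where most of the text appeared in one revision,
    a possible (not conclusive) sign of a pasted, fully polished draft."""
    sizes = [count for _, count in revs]
    total_growth = sizes[-1] - sizes[0]
    if len(revs) < 2 or total_growth <= 0:
        return False
    largest_jump = max(b - a for a, b in zip(sizes, sizes[1:]))
    return largest_jump / total_growth >= burst_fraction

print(looks_pasted(revisions))  # False: this history grew incrementally
```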

Technological approaches use software and systems to detect or deter AI-generated content by analyzing how text is created or submitted. Examples include Turnitin's AI detection, which flags likely plagiarism and AI use, and OpenAI's research on watermarking, which embeds subtle statistical signatures in generated text.
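Neither Turnitin's nor OpenAI's internals are public, but the statistical idea behind many detectors can be sketched: language models assign higher probability (lower perplexity) to text that resembles their own output. A toy version using GPT-2, for illustration only; real detectors are far more sophisticated, and false positives are common:

```python
# pip install transformers torch
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of the text under GPT-2; lower means more predictable."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token cross-entropy
    return torch.exp(loss).item()

# Unusually low perplexity is weak evidence of machine generation;
# it should trigger human review, never an automatic penalty.
print(perplexity("The report summarizes the key findings of the study."))
print(perplexity("Moonlight argues softly with the dishwasher's ambition."))
```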

In the pedagogical approach, educators redesign assignments and assessments to emphasize critical thinking, originality, and personal connection, which are harder for AI to simulate, and students are taught how to use AI responsibly as a learning enhancer. Examples include Otter.ai for lecture summarization and study support, and custom AI reflection assignments, such as comparing ChatGPT outputs with human-written drafts.

Procedural approaches include institutional or classroom policies that govern when and how AI can be used, often relying on transparency, documentation, and updated honor codes. Canvas LMS with audit trail and version control features is one example.

While advancements are being made in detection tools, none are foolproof. False positives can create ethical dilemmas, especially when students are punished without clear evidence. Institutions will need clear, transparent, and fair AI use policies, combined with student education and faculty development.

As AI becomes widely adopted across industries, education will ultimately need to shift from a suspicious stance to one of guided integration. Maintaining integrity may involve detection and deterrence; however, approaches will also need to include trust-building and authentic assessment.

Can you share an example of a study or project where AI significantly improved learning outcomes?

The University of Michigan's “Maizey”, a customized generative AI tool, is trained on specific course materials to provide personalized assistance to students. Positive results in student performance and engagement have been reported, and the tool has increased efficiency for both instructors and students. For example, in an Operations Management course at the University of Michigan Ross School of Business, the tool saved instructors 5 to 12 hours of answering questions each week. For students, it provided the ability to ask questions beyond the classroom or a professor’s office hours, and self-reported surveys indicated improvements in assignment and quiz scores. This is a small but significant step in scaling personalized support.
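Maizey's internals aren't public, but course-specific assistants of this kind are commonly built with retrieval-augmented generation (RAG): retrieve the most relevant course snippets, then have an LLM answer from them. A minimal sketch of the retrieval step using TF-IDF (production systems typically use neural embeddings); the course snippets are invented for illustration:

```python
# pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in course materials; in practice these would be chunks of
# lecture notes, slides, and assignment descriptions.
docs = [
    "Little's Law: average inventory equals throughput times flow time.",
    "The bottleneck is the process step with the lowest capacity.",
    "Safety stock buffers against demand variability during lead time.",
]

vectorizer = TfidfVectorizer().fit(docs)
doc_vectors = vectorizer.transform(docs)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k course snippets most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
    return [docs[i] for i in scores.argsort()[::-1][:k]]

question = "What limits the maximum output of a process?"
context = "\n".join(retrieve(question))

# The assembled prompt is what gets sent to an LLM, so answers are
# grounded in course content instead of the open web.
print(f"Answer using only this course material:\n{context}\n\nQ: {question}")
```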

Should English and coding still be taught in the same format? For instance, how would one teach a CS student the value/quality of code when it’s generated by AI? Will the field/study be more about prompts rather than writing code?

While English and coding will continue to be foundational, how they are taught needs to evolve.

The shift for English instruction may involve multiple facets. For instance, we envision a move toward developing skills in discovering available information with AI tools and vetting it for accuracy. Beyond leveraging available information, more focus on creating and producing new information, learning to make informed judgments as users of information, leveraging AI for writing with critical oversight, and ethical writing will be important. With AI bringing basic information to our fingertips, an increased focus on creative thinking and creative information development, analysis, data storytelling, and data visualization will be valuable.

For coding, the emphasis will need to shift from syntax mastery to problem-solving, critical thinking, and the ability to adapt and improve AI-generated code. The shift will be from writing code from scratch to evaluating, refining, and architecting systems with the assistance of AI. While prompts will be important, training will need to emphasize how to critically assess and improve code rather than simply generating it.
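As a hedged illustration of that evaluate-and-refine workflow, consider a hypothetical exercise: students receive an AI-generated helper, find the defect, and repair it. Both functions below are invented for the exercise:

```python
# Step 1: the "AI-generated" helper, as a student might accept it at face value.
def moving_average_ai(values, window):
    averages = []
    for i in range(len(values)):
        chunk = values[i:i + window]
        averages.append(sum(chunk) / window)  # bug: trailing chunks are short
    return averages

# Step 2: the review step the text describes. The final windows are
# incomplete, so the last results are silently wrong; the fix emits only
# full windows and validates its input.
def moving_average_fixed(values, window):
    if window <= 0:
        raise ValueError("window must be positive")
    return [
        sum(values[i:i + window]) / window
        for i in range(len(values) - window + 1)
    ]

data = [2, 4, 6, 8, 10]
print(moving_average_ai(data, 3))     # [4.0, 6.0, 8.0, 6.0, 3.33...] - wrong tail
print(moving_average_fixed(data, 3))  # [4.0, 6.0, 8.0]
```

Grading such an exercise rewards the diagnosis and the fix, not the initial generation, which is exactly the competency shift described above.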

With the tech industry leveraging AI to reduce the cost of employing developers, does this impact young people's interest in studying CS? 

Despite automation and computing advances, and perhaps fueled by them, Computer Science (CS) remains a dynamic field. From the early days when the focus in CS classes was writing a compiler, to the evolving focus on AI and now Quantum Computing, computer science grows with the evolution of innovative technologies and their applications.

Regarding software development, with the advent of automation leveraging AI, some students may gravitate toward areas where they feel human agency remains central, including AI ethics, security, data science, human-computer interaction, data storytelling, and data visualization. Some may shift from coding to prompt engineering, but the underlying logic, structure, and systems thinking are still core competencies.

Will Google and Stackover flow in their current forms become irrelevant? 

Google and Stack Overflow may not vanish, but they will evolve. AI systems trained on forums like Stack Overflow already offer contextualized responses. However, the social and pedagogical value of such platforms, seeing multiple solutions, peer validation, and community norms, remains important. The future may see integration rather than obsolescence.

How do you believe those on the technical side of AI can fight against the threats to education?

Technologists can choose to embrace rather than fight AI, as it is here and will be here for a while. They can choose to embed ethical guardrails into AI tools and their use, advocate for transparent systems, and co-design with educators. They can support open infrastructure and prioritize broad usability. Perhaps most importantly, they must acknowledge that technological literacy is also civic literacy in the AI age.

Perhaps the real opportunity is to see AI as a tool to help critical thinking and creativity grow, using AI tools to provide a baseline in thinking, with humans using that as a springboard for more creative and imaginative thinking and innovation.

How do you envision the educational and academic landscape in 5 years, 10 years, 20 years?

In 5 years: AI will likely be woven seamlessly into the day-to-day fabric of education. AI-powered tools will assist with personalized learning pathways, real-time feedback, and multilingual content delivery. Adaptive platforms will support differentiated instruction, helping students master concepts at their own pace while offering educators rich insights into individual progress. Faculty and students will routinely use AI for brainstorming, tutoring, lab simulations, and writing assistance. Microcredentials and skills-based learning, especially in areas like AI literacy, ethics, and data fluency, will grow rapidly, both inside and alongside traditional degree programs. Efforts to leverage real-world insights to improve and advance the science of AI will grow, and the application of AI in science, engineering, and other disciplines will increase.

Trust in and use of AI in education will likely have to evolve in order to harness its value and mitigate the risks it introduces. As mentioned above, some educators are requiring in-person, hand-written test-taking to confirm what students are actually learning and understanding rather than letting AI answer for them; managing the use of AI to ensure real learning may be an ongoing challenge into the future.

In 10 years: The classroom may be less bound by physical walls or static schedules. We could see AI agents co-teaching with human instructors, managing formative assessments, generating tailored lesson variations, and supporting students across multiple languages and learning styles. Augmented Reality (AR) and Virtual Reality (VR) will likely be commonplace in STEM labs, medical training, and arts education, offering fully immersive simulations and collaborative experiences. Interdisciplinary programs that combine computing, humanities, ethics, and policy will become the norm, responding to the needs of an AI-shaped world. Institutions may start awarding modular degrees that reflect personalized learning trajectories, not just traditional majors.

In 20 years: The boundary between formal and informal education may blur almost completely. AI-powered tutors will likely be embedded in the tools and environments students use daily, including their personal digital devices such as smartphones, wearables, home assistants, or AR glasses. Learning may happen anywhere, anytime, guided by intelligent agents that adapt not just to what learners know, but how they feel, what motivates them, and where they struggle. Credentials may shift from degrees tied to credit hours to skills portfolios based on demonstrated mastery, verified through performance in real-world simulations or digital apprenticeships. Lifelong learning will no longer be optional. It will be dynamically integrated into professional life through just-in-time learning pathways driven by AI.

Throughout this transformation, human educators will remain essential. Their roles may evolve from content deliverers to mentors, curators, and ethical stewards of technology, but their presence will be more critical than ever in guiding values, fostering community, and ensuring that learning remains deeply human, not just algorithmic.

How can research publishers help in AI and education?

Broadly, research publishers can help in AI and education by inspiring and publishing all sides of the AI story - the good, the bad, and the ugly - like Springer did with this invited blog. Allow everyone to learn from others through your publications.

Research publishers also have a role in ensuring that AI is used responsibly in scholarly communication, including setting norms around disclosure of AI usage, enabling reproducibility through shared datasets and code, and fostering interdisciplinary research that explores AI's impact on pedagogy, as well as on all people, institutions and systems using AI or who may be impacted by the use of AI.

One question above was written by AI. Can you guess which one?

We are not sure, however, this seems like a fitting end. AI is now both the tool and the topic, the assistant and the questioner. And perhaps that’s the most important takeaway: we are all co-authors in this unfolding story.

A guess is the “Google and Stackover flow” question based on the fact that it should be Stack Overflow.


Florence Hudson is Executive Director of the Northeast Big Data Innovation Hub at Columbia University and Founder & CEO of FDHint, LLC, a global advanced technology consulting firm. A former IBM Vice President and Chief Technology Officer, Internet2 Senior Vice President & Chief Innovation Officer, Special Advisor for the NSF Cybersecurity Center of Excellence, and aerospace engineer at the NASA Jet Propulsion Lab and Grumman Aerospace Corporation, she is an Editor and Author for Springer, Elsevier, Wiley, IEEE, and other publications. She leads the development of global IEEE/UL standards to increase Trust, Identity, Privacy, Protection, Safety and Security (TIPPSS) for connected healthcare data and devices and other cyber-physical systems, and is Vice Chair of the IEEE Engineering in Medicine & Biology Society Standards Committee. She earned her Mechanical and Aerospace Engineering degree from Princeton University, and executive education certificates from Harvard Business School and Columbia University.

Forough Ghahramani is Vice President for Research and Innovation for New Jersey Edge (Edge). As chief advocate for research and discovery, Forough serves as an advisor and counsel to senior higher education leaders, helping translate their vision for supporting research collaborations and innovation into actionable advanced cyberinfrastructure (CI) strategy that leverages regional and national advanced technology resources. Forough was previously at Rutgers University, providing executive management for the Rutgers Discovery Informatics Institute (RDI2) and working with Dr. Manish Parashar (Director). Her experience in higher education also includes serving as associate dean and department chair. Prior to joining academia, she held senior-level engineering and management positions at Digital Equipment Corporation and Hewlett Packard (HP), and also consulted for Fortune 500 companies on high-performance computing environments. Forough is a Senior Member of IEEE, has an appointment to the NSF Engineering Research Visioning Alliance (ERVA) Standing Council, is Vice President of NJBDA's Research Collaboration Committee, and serves on the Northeast Big Data Innovation Hub and the Ecosystem for Research Networking (ERN) steering committees. Forough has a doctorate in Higher Education Management from the University of Pennsylvania, an MBA in Marketing from DePaul University, an MS in Computer Science from Villanova University, and a BS in Mathematics with a minor in Biology from Pennsylvania State University. She is consulted at the state, national, and international levels on STEM workforce development strategies. She is currently a Principal Investigator on two NSF-funded projects: “EAGER: Empowering the AI Research Community through Facilitation, Access, and Collaboration” and “CC* Regional Networking: Connectivity through Regional Infrastructure for Scientific Partnerships, Innovation, and Education (CRISPIE)”, and a co-PI on the NSF ADVANCE Partnership: “New Jersey Equity in Commercialization Collective.” She previously served as a co-PI on the NSF CC* OAC Planning project “Advanced Cyberinfrastructure for Teaching and Research at Rowan University and the Southern New Jersey Region” and the NSF CCRI Planning project “A Community Research Infrastructure for Integrated AI-Enabled Malware and Network Data Analysis”.

The original article can be found here »