Past CESTA Seminars

Repository of Past CESTA Talks

Fall 2019

October 8, 2019

Transient Electronics and Digital Sensor Systems

John Rogers, Northwestern University

A remarkable feature of modern integrated circuit technology is its ability to operate in a stable fashion, with almost perfect reliability, without physical or chemical change. Recently developed classes of electronic materials create an opportunity to engineer the opposite outcome, in the form of ‘transient’ devices that dissolve, disintegrate or otherwise disappear at triggered times or with controlled rates. Water-soluble transient electronics serve as the foundations for interesting applications in zero-impact environmental monitors, ‘green’ consumer electronics and bioresorbable biomedical implants. This presentation describes the foundational concepts in the materials science and assembly processes for bioresorbable electronics and digital sensing systems. Wireless monitors of intracranial temperature, pressure and electrophysiology for use in treatment of traumatic brain injury, and nerve stimulators configured for use in accelerated neuroregeneration, provide application examples.


October 15, 2019

Distributed Sensing: When are our sensors too small to be smart?

Joey Talghader, University of Minnesota


Distributed sensor networks are proliferating in products and systems throughout the economy, and their technological capabilities are rising at a rapid rate. At present, most sensors are either directly connected to the networks of which they are a part, for example in automobiles, or they are large and complex enough that they have on-board power and can connect periodically to the global telecommunication system, for example in remote weather stations. These types of sensors are often called "smart" sensors because they can take advantage of power supplies, communications devices, microelectronic data processing, software, and other resources that are part of their individual unit or the system to which they are directly or indirectly connected.

However, future systems may have sensors that travel passively, carried by fluid flows such as wind or water, and that have dimensions of a few microns or less. Such sensors cannot be “smart” in the traditional way we define the term. The difficulty is not merely miniaturization; instead, there are fundamental issues of diffraction for remote communication and of volumetric power density for batteries or photovoltaic cells that prevent independent operation.

This talk will discuss these issues and present two examples of distributed sensors that must operate in environments where smart sensing is difficult. The first sensors are semi-autonomous particles for sensing metal-ion concentration in fluids. The devices were designed to be released in large numbers to collect statistical data as they flow through microchannels. They incorporate a monolithically integrated photovoltaic (PV) power supply and use a resonant cantilever mass sensor to detect electrodeposited metal at the tip of a cantilever. Individual devices correctly predict, within about a factor of two, the metal-ion concentration even when operating on light scavenged from a room lamp. The current sensors operate at powers of about 50 nW and are integrated into a total volume below 0.046 mm³. The second sensors are also particles, but designed to measure temperature inside explosions, perhaps the harshest environment on Earth. The sensor particles are embedded in or around an explosive device and disperse with the explosion. The thermoluminescence (TL) of the oxide microparticles gives direct information on temperature and time because the trapped charges that ultimately give rise to TL have a probability of detrapping that follows an Arrhenius-type relationship. The effects of maximum temperature on the intensity ratios of various luminescent peaks have been compared with first-order kinetics theory and predict temperatures to within 5% or better.
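
The Arrhenius-type detrapping behavior described above can be sketched numerically. The trap depth, frequency factor, pulse duration, and temperatures below are illustrative placeholder values, not parameters from the talk:

```python
import math

# First-order kinetics: the escape probability per unit time for a trapped
# charge follows an Arrhenius relationship p(T) = s * exp(-E / (k_B * T)),
# so the population surviving a thermal pulse encodes the peak temperature.
K_B = 8.617e-5  # Boltzmann constant in eV/K

def detrap_rate(temp_k, trap_depth_ev, freq_factor_hz):
    """Arrhenius escape rate (1/s) for a trap of depth E at temperature T."""
    return freq_factor_hz * math.exp(-trap_depth_ev / (K_B * temp_k))

def surviving_fraction(temp_k, duration_s, trap_depth_ev=1.0, freq_factor_hz=1e12):
    """Fraction of trapped charges remaining after an isothermal pulse."""
    return math.exp(-detrap_rate(temp_k, trap_depth_ev, freq_factor_hz) * duration_s)

# Hotter pulses empty the traps faster, dimming the later TL readout.
for t_kelvin in (600, 800, 1000):
    print(t_kelvin, surviving_fraction(t_kelvin, duration_s=1e-3))
```

Because different luminescent peaks correspond to different trap depths, comparing their intensities after an event constrains the peak temperature, which is the basis of the comparison with first-order kinetics theory mentioned above.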

These types of passive, “dumb” sensors are absolutely necessary in certain critical environments but present challenges for incorporation into traditional distributed networks. We hope to contribute to CESTA by addressing some of these challenges.


October 22, 2019

Detecting plant biodiversity across scales using hyperspectral sensors

Jeannine Cavender-Bares, University of Minnesota

Biologists are increasingly using remote sensing technology in research and applications. My collaborative team has been using hyperspectral sensors—on handheld devices and mounted on UAV platforms, aircraft and soon on satellites—to detect plant function, identity and disease from the leaf level to the landscape scale. A central goal of this work is to detect changes in plant function, composition and biodiversity over time. We have shown that, with appropriate spatial resolution, plant spectral diversity can predict plant diversity and plant productivity. We are also finding that remotely sensed vegetation chemistry predicts belowground soil properties and processes. A key focus of our current work is developing models from hyperspectral data to detect oak wilt in Minnesota forests—a devastating threat to oak trees if left unmanaged. Early and accurate detection enables more efficient management. We are interested in developing inexpensive, lightweight sensor arrays that can be mounted on UAVs to detect oak wilt and potentially other tree diseases, enhancing rural and urban forest management. The project shows high potential, and we are open to new collaborative opportunities with CESTA partners.


November 19, 2019

A Day in the Life of a Data Scientist at Seagate

Dr. Nicholas Propes and Dr. Zhiqiang Xing

What is it like to be a Data Scientist at Seagate? Seagate Data Scientists Dr. Nicholas Propes and Dr. Zhiqiang Xing will talk about their jobs, the types of projects they work on, the technical challenges they face, and the techniques they often employ to overcome those challenges. They will also discuss upcoming trends and technologies in data science.


December 10, 2019

Brain Machine Communication

Zhi Yang, Department of Biomedical Engineering, UMN

We study and implement emerging brain technologies. In this talk, I will first introduce our research. I will share demos and patient interviews to explain how our research could change patients' lives. I will then discuss the challenges, recent innovations, and the next steps toward translating the research into clinical prototypes. Finally, I will present our ongoing work and discuss future directions for innovating the next generation of neural modulation technologies and their upcoming human clinical applications in a broader context.


Spring 2019

February 12, 2019

Photoacoustic Imaging (Ashkenazi) and Polarized Light Imaging (Akkin) for Biomedical Applications

Shai Ashkenazi and Taner Akkin, Associate Professors of Biomedical Engineering, University of Minnesota

Part I: The field of Photoacoustic Imaging (PAI) for medical and biological applications has changed dramatically in the past two decades. It has evolved from a bulk absorption spectroscopy technique for sample analysis into a high resolution imaging modality, a change similar to the evolution of Magnetic Resonance Spectroscopy (MRS) into Magnetic Resonance Imaging (MRI). However, as opposed to the rapid adoption of MRI in medical diagnosis, PAI is still not in use in clinics. The reasons may be insufficient depth penetration, cost (relative to alternatives), bulky laser systems, and the challenging engineering design of light and ultrasound delivery. Yet, the attraction of PAI is its ability to embed optical tissue properties in a plain ultrasound image, thereby extending ultrasound imaging into a functional and molecular imaging modality. Dr. Ashkenazi will introduce the basic principles of PAI and then explore different mechanisms of contrast that can be implemented in PAI. Primarily, he will focus on using transient absorption and triplet-triplet absorption as potential contrasts for PAI in medical applications.

Part II: Polarization is an essential but underutilized property of light. Imaging systems that are capable of making polarization-sensitive measurements require careful design and in some cases use of sophisticated analysis methods. Dr. Akkin will present optical coherence tomography (OCT) based depth-resolved tissue imaging for which the intensity, phase and polarization properties of backscattered/reflected light are analyzed to extract various imaging contrasts. This label-free imaging and mapping technique will be introduced for generating 3D optical tractography of whole-brain microconnectivity with serial sectioning. Also, he will present contrast enhancement by titanium dioxide perfusion, which enables visualization of the vasculature in cross-polarization images. Signal and image processing approaches as well as deep learning algorithms are being developed for better visualization and separation of the vascular and white matter networks.


February 26, 2019

Machine Learning and Citizen Science at Zooniverse

Darryl Wright, Postdoctoral Researcher, School of Physics and Astronomy, University of Minnesota

As researchers gather ever larger data sets, there is an increasing reliance on machine learning and a growing demand for citizen science. We will introduce Zooniverse, the world's largest citizen science platform, and show how citizen science is helping researchers unlock meaningful information from the data they collect. We will also demonstrate how classifications produced by volunteers are enabling machine learning, and how citizen science and machine learning can empower each other to process data more efficiently than either alone.


March 5, 2019

Online Scalable Learning Adaptive to Unknown Dynamics and Graphs

Georgios Giannakis, Professor, ADC Chair in Wireless Telecommunications, Department of Electrical and Computer Engineering, Director of the Digital Technology Center, University of Minnesota

Kernel based methods exhibit well-documented performance in various nonlinear learning tasks. Most of them rely on a preselected kernel, whose prudent choice presumes task-specific prior information. Especially when the latter is not available, multi-kernel learning has gained popularity thanks to its flexibility in choosing kernels from a prescribed kernel dictionary. Leveraging the random feature approximation, this talk will first introduce, for static setups, a scalable multi-kernel learning approach (termed Raker) to obtain the sought nonlinear learning function ‘on the fly,’ bypassing the ‘curse of dimensionality’ associated with kernel methods. We will also present an adaptive multi-kernel learning scheme (termed AdaRaker) that relies on weighted combinations of advice from hierarchical ensembles of experts to boost performance in dynamic environments. The weights account not only for each kernel’s contribution to the learning process, but also for the unknown dynamics. Performance is analyzed in terms of both static and dynamic regrets. AdaRaker is uniquely capable of tracking nonlinear learning functions in environments with unknown dynamics, with analytic performance guarantees. The approach is further tailored for online graph-adaptive learning with scalability and privacy. Tests with synthetic and real datasets will showcase the effectiveness of the novel algorithms.
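
The random feature approximation at the heart of this approach can be illustrated with a textbook random Fourier feature sketch for the Gaussian kernel; this is a generic construction, not the speakers' Raker implementation, and the dimensions and parameters below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw random frequencies once; the same map must be applied to every input.
d, n_features, gamma = 3, 5000, 0.5
w = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
b = rng.uniform(0, 2 * np.pi, size=n_features)

def z(x):
    """Random Fourier feature map: z(x) @ z(y) approximates the Gaussian
    kernel exp(-gamma * ||x - y||^2), so kernel evaluations reduce to
    inner products of fixed-length feature vectors."""
    return np.sqrt(2.0 / n_features) * np.cos(x @ w + b)

x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-gamma * np.sum((x - y) ** 2))
approx = z(x) @ z(y)
print(exact, approx)  # the two values agree closely
```

Because the learner works with explicit finite-dimensional features instead of a growing kernel matrix, each update costs time proportional to the number of features, which is what makes online operation ‘on the fly’ feasible.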


March 12, 2019

Deliberations on Scientific and Methodological Aspects of Machine Learning

Vladimir Cherkassky, Professor of Electrical and Computer Engineering at the University of Minnesota, Twin Cities.

Many diverse fields, such as applied mathematics, statistics, machine learning, data mining, econometrics, bioinformatics, etc., are concerned with the estimation of data-analytic models. More recently, due to the abundance of data and cheap computing power, machine learning (ML) algorithms have become very popular in various applications, even though many such algorithms are heuristics vaguely motivated by biological (as opposed to mathematical) arguments. This disconnect (between mathematics and practical applications) may seem strange, given the deep intrinsic connection between mathematics, science and engineering. Well-known historical examples include Kepler’s Laws and (classical) statistical science. The purpose of my talk is to explain various reasons for the current disconnect, including (a) conceptual (philosophical) aspects; (b) technical (mathematical) aspects and (c) non-technical (social) aspects. In particular, my talk will elaborate on the different interpretations of the philosophical concepts of deductive and inductive reasoning in classical science, statistics and ML. This methodological difference will be further clarified via several basic assumptions underlying all ML methods, as presented in Vapnik-Chervonenkis (VC) learning theory. Further, I will discuss several ‘non-standard’ inductive problem settings (i.e., different from standard supervised learning) that enable better generalization with finite training data.


March 26, 2019

Physics Guided Machine Learning: A New Paradigm for Modeling Dynamical Systems

Vipin Kumar, Regents Professor and William Norris Endowed Chair, Department of Computer Science and Engineering, University of Minnesota

Physics-based models of dynamical systems are often used to study engineering and environmental systems. Despite their extensive use, these models have several well-known limitations due to incomplete or inaccurate representations of the physical processes being modeled. Given rapid data growth due to advances in sensor technologies, there is a tremendous opportunity to systematically advance modeling in these domains by using machine learning (ML) methods. However, capturing this opportunity is contingent on a paradigm shift in data-intensive scientific discovery since the “black box” use of ML often leads to serious false discoveries in scientific applications.  Because the hypothesis space of scientific applications is often complex and exponentially large, an uninformed data-driven search can easily select a highly complex model that is neither generalizable nor physically interpretable, resulting in the discovery of spurious relationships, predictors, and patterns. This problem becomes worse when there is a scarcity of labeled samples, which is quite common in science and engineering domains.

This talk makes the case that in real-world systems governed by physical processes, there is an opportunity to take advantage of fundamental physical principles to inform the search for a physically meaningful and accurate ML model. Even though this will be illustrated in the context of modeling water temperature, the paradigm has the potential to greatly advance the pace of discovery in a number of scientific and engineering disciplines where physics-based models are used, e.g., power engineering, climate science, weather forecasting, materials science, and biomedicine.


April 2, 2019

Recent Advances in Adversarial Machine Learning

Mingyi Hong, Assistant Professor, Department of Electrical and Computer Engineering, University of Minnesota

Recently, it has been observed that machine learning algorithms and models, especially deep neural networks, are vulnerable to adversarial examples. For example, in image classification problems, one can design algorithms to generate adversarial examples for almost every image with very small, human-imperceptible perturbations. In this talk, we will give an introduction to recent advances in designing adversarial examples for machine learning models. In particular, we will show how different types of system design and optimization methods can be used to build powerful black-box adversarial attacks on existing machine learning models. We will focus on generic algorithm design, as well as on illustrating the connections and empirical performance of different approaches.
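
The core trick behind query-based black-box attacks can be sketched on a toy problem: estimate the gradient from loss queries alone, then take an FGSM-style signed step. The linear "model" below is a stand-in for illustration, not any specific system from the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "model": a fixed linear scorer that we may only query, not differentiate.
w_true = rng.normal(size=20)

def model_loss(x):
    """Black-box query: returns a scalar loss for input x."""
    return float(w_true @ x)

def estimate_gradient(f, x, delta=1e-4):
    """Zeroth-order gradient estimate via coordinate-wise central differences,
    a building block of many query-based black-box attacks."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = delta
        g[i] = (f(x + e) - f(x - e)) / (2 * delta)
    return g

x = rng.normal(size=20)
g = estimate_gradient(model_loss, x)
eps = 0.05
x_adv = x + eps * np.sign(g)  # signed step, staying inside an L-infinity ball
```

Practical attacks replace the exact coordinate sweep (two queries per input dimension per step) with randomized estimators to keep query counts manageable.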


April 9, 2019

Electrolyte Gated Transistors with Floating Gates as Biosensors

Dan Frisbie, Distinguished McKnight University Professor and Head, Department of Chemical Engineering and Materials Science, University of Minnesota

Electrolyte gated transistors (EGTs) are a sub-class of thin film transistors that are extremely promising for biological sensing applications. These devices employ a solid electrolyte as the gate insulator; the very large capacitance of the electrolyte results in low voltage operation and high transconductance or gain when incorporated into an inverter architecture. This talk will describe the fabrication of floating gate EGTs and their use as protein sensors. The floating gate EGT (FG-EGT, or simply FGT) design allows separation of the semiconductor channel from the analyte capture surface. That is, two electrolyte compartments are employed, one that coats an arm of the floating gate and the semiconductor source-drain channel, and the other that coats the capture surface on the other arm of the floating gate and an electrically addressable control gate. The capture surface is coated with aptamers or antibodies that selectively bind the molecular target. Having two separate electrolyte compartments prevents contamination of the semiconductor with the analyte solution and allows optimization of the response of the device independent of surface chemistry. This talk will describe the fundamental operating principles of the FGT and its implementation in a sub-1 V differential amplifier sensor scheme for label-free protein detection. Prospects for generalizing the FGT platform to other analyte classes will also be discussed.


April 16, 2019

Using Space to Help Life on Earth: How the Small Satellite Revolution and AI are Transforming How We See and Understand Our World

Andrew Zolli, Vice President, Global Impact Initiatives, Planet

A revolution in low-cost, space-based remote sensing, combined with new analytical tools in machine learning, computer vision and artificial intelligence, are creating unprecedented new opportunities for tackling the world’s toughest challenges.

Andrew Zolli is the Vice President for Global Impact Initiatives at Planet (www.planet.com). Started by three NASA engineers, Planet has deployed the largest constellation of Earth-observation satellites in history. Together, these satellites image the entire surface of the Earth, every day, in high resolution. The resulting data holds transformational potential for basic science and for a host of global challenges, including monitoring deforestation, agriculture and cities, tracking migration, mitigating the effects of climate change, speeding disaster response, and advancing planetary health, among others.

In this talk, Andrew will share lessons from the forefront of the New Space renaissance, as well as breakthrough new remote sensing applications being used right now around the world, and describe how new agile manufacturing and development technologies are accelerating the pace of innovation.


April 23, 2019

Bringing Compact High-Field MRI Systems to Life through Novel Methods that Tolerate Extreme Field Inhomogeneity

Jarvis Haupt (Department of Electrical and Computer Engineering, University of Minnesota) and Mike Garwood (CMRR and Department of Radiology, University of Minnesota)

MRI is critically important for understanding the human brain and for diagnosing neurological disease, yet is currently inaccessible to ~90% of the world’s population. One way to increase the accessibility of MRI is to decrease the size of the magnet, making it lighter and transportable, but this comes at the cost of greatly diminished uniformity of the main magnetic field, B0. Thus, the main obstacle to making a small MRI scanner comes down to the question: Is it possible to make high quality MR images with a highly inhomogeneous B0? To accomplish this, the basic approach to performing MRI must change; specifically, how spatial information is encoded, and then, how the resulting MRI signals can be reconstructed into high resolution images. In this talk we will discuss the unique challenges and opportunities that arise within our approach, which uses spatiotemporal, frequency-swept pulses to excite MRI signals over broad frequency ranges and a model-based image reconstruction algorithm that deciphers the spatiotemporal information in the MRI signals.


May 7, 2019

What Lies Beneath: Inverse Scattering with Sparse Data (Guzina) and Learning the Sparse Code of Solids with Anomalies: A Model-Agnostic Approach to Wave-Based Diagnostics (Gonella)

Bojan Guzina, Shimizu Professor, and Stefano Gonella, Associate Professor, both in the Department of Civil, Environmental, and Geo-Engineering, University of Minnesota.

Guzina: Waveform tomography and in particular inverse obstacle scattering are essential to a broad spectrum of scientific and technological disciplines, including sonar and radar imaging, geophysics, oceanography, optics, medical diagnosis, and non-destructive material testing. In general, any relationship between the wavefield scattered by an obstacle and its geometry (or physical characteristics) is nonlinear, which invites two overt solution strategies: (i) linearization via e.g. the Born approximation and ray theory, or (ii) pursuit of a nonlinear minimization approach. Over the past two decades, however, a number of sampling methods have emerged that both consider the nonlinear nature of the inverse scattering problem and dispense with iterations. Commonly, these techniques deploy an indicator functional that varies with the spatial coordinates of the trial (i.e., sampling) point, and projects the sensory data (namely observations of the scattered field) onto a functional space reflecting the ‘baseline’ wave motion in a background domain. This indicator functional, designed to reach extreme values when the sampling point strikes the anomaly, can be formulated from either a mathematical or a physical standpoint. The latter methodology is perhaps best exemplified by the topological sensitivity (TS) approach. This talk will cover the idea and experimental validation of the TS methodology in the context of acoustic and elastic waves, including recent support for the approach within the framework of catastrophe theory.

Gonella: In this work we illustrate an approach to structural and materials diagnostics revolving around the mechanistic reinterpretation of concepts and methods originated in the fields of signal and image processing and machine learning. Anomalies and defects manifest in the dynamic response of a solid medium as a collection of salient and spatially localized events, which are reflected in the data structure of the response in the form of a set of behaviorally or topologically sparse features. We introduce a model-agnostic and baseline-free methodology that requires virtually no a priori knowledge of the medium’s material properties and forsakes the need for any knowledge of the system's behavior in its pristine state. This agnostic attribute makes the methodology powerful in dealing with media with heterogeneous or unknown property distribution, for which a material model is unsuitable or unreliable. The method revolves around the construction of sparse representations of the dynamic response, which are obtained by learning instructive dictionaries that form a suitable basis for the response data. The resulting sparse coding problem is recast as a modified dictionary-learning task with additional sparsity constraints enforced on the atoms of the dictionaries, which provides them with a prescribed spatial topology designed to unveil potential anomalous regions in the physical domain. The method is validated using synthetically generated data as well as experimental data acquired using a scanning laser Doppler vibrometer.
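
The sparse-representation idea can be made concrete with a generic sparse coding sketch: soft-thresholded gradient iterations (ISTA) that recover the few active atoms of a dictionary from a signal. The dictionary and signal below are synthetic, and this is a plain illustration of sparse coding, not the modified dictionary-learning formulation described in the talk:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic dictionary D and a signal y built from a few atoms: the premise
# is that anomalies show up as a handful of active, localized atoms.
dim, n_atoms = 30, 50
D = rng.normal(size=(dim, n_atoms))
D /= np.linalg.norm(D, axis=0)  # unit-norm atoms
true_code = np.zeros(n_atoms)
true_code[[3, 17, 41]] = [1.5, -2.0, 1.0]
y = D @ true_code

def ista(D, y, lam=0.05, n_iter=500):
    """Iterative soft thresholding for min_x 0.5*||y - Dx||^2 + lam*||x||_1."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the smooth term
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = x - D.T @ (D @ x - y) / L                          # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # shrinkage
    return x

x_hat = ista(D, y)
print(np.flatnonzero(np.abs(x_hat) > 0.1))  # a small set of active atoms
```

In the diagnostics setting described above, the additional step is that the dictionary itself is learned under sparsity constraints on its atoms, giving them the prescribed spatial topology that localizes anomalous regions.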