Monday, January 22, 2018

Quantized Compressive Sensing with RIP Matrices: The Benefit of Dithering




Quantized Compressive Sensing with RIP Matrices: The Benefit of Dithering by Chunlei Xu, Laurent Jacques

In Compressive Sensing theory and its applications, quantization of signal measurements, as integrated into any realistic sensing model, impacts the quality of signal reconstruction. In fact, there even exist incompatible combinations of quantization functions (e.g., the 1-bit sign function) and sensing matrices (e.g., Bernoulli) that cannot lead to an arbitrarily low reconstruction error when the number of observations increases.
This work shows that, for a scalar and uniform quantization, provided that a uniform random vector, or "random dithering", is added to the compressive measurements of a low-complexity signal (e.g., a sparse or compressible signal, or a low-rank matrix) before quantization, a large class of random matrix constructions known to respect the restricted isometry property (RIP) are made "compatible" with this quantizer. This compatibility is demonstrated by the existence of (at least) one signal reconstruction method, the "projected back projection" (PBP), whose reconstruction error is proved to decay when the number of quantized measurements increases.
Despite the simplicity of PBP, which amounts to projecting the back projection of the compressive observations (obtained from their multiplication by the adjoint sensing matrix) onto the low-complexity set containing the observed signal, we also prove that given a RIP matrix and for a single realization of the dithering, this reconstruction error decay is also achievable uniformly for the sensing of all signals in the considered low-complexity set.
We finally confirm empirically these observations in several sensing contexts involving sparse signals, low-rank matrices, and compressible signals, with various RIP matrix constructions such as sub-Gaussian random matrices and random partial Discrete Cosine Transform (DCT) matrices.
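For readers who want to play with the idea, here is a minimal numerical sketch (ours, not the authors' code) of the pipeline described in the abstract: dithered uniform scalar quantization of sub-Gaussian measurements, followed by projected back projection (PBP) onto the s-sparse set via hard thresholding. All parameter choices, and the detail of subtracting the dither before back-projecting, are our own simplifications:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 128, 2048, 5       # ambient dimension, measurements, sparsity
delta = 0.5                  # quantization bin width (our choice)

# s-sparse unit-norm signal
x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
x /= np.linalg.norm(x)

A = rng.standard_normal((m, n))        # sub-Gaussian RIP-type matrix
xi = rng.uniform(0, delta, size=m)     # uniform random dithering

# scalar uniform quantizer applied to the dithered measurements
q = delta * np.floor((A @ x + xi) / delta) + delta / 2

# projected back projection: back-project (dither removed), then project
# onto the set of s-sparse vectors by hard thresholding
z = A.T @ (q - xi) / m
x_hat = np.zeros(n)
keep = np.argsort(np.abs(z))[-s:]
x_hat[keep] = z[keep]

print(f"reconstruction error: {np.linalg.norm(x_hat - x):.3f}")
```

Increasing m (with everything else fixed) should make the printed error decay, which is the point of the paper.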









Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Call for participants: Workshop and Advanced school "Statistical physics and machine learning back together" in Cargese, Corsica, France, August 20-31, 2018



Lenka sent me the following the other day:


Dear Colleagues and Friends,

This is the announcement and call for participants of the workshop and advanced school "Statistical physics and machine learning back together" that will take place in Cargese, Corsica, France during August 20-31, 2018. Please forward this to your colleagues/students that may be interested.

Researchers, students and postdocs interested to participate in the event are invited to apply on the website http://cargese.krzakala.org
(or http://www.lps.ens.fr/~krzakala/WEBSITE_Cargese2018/home.htm ) by February 28th, 2018.

The capacity of the Cargese amphitheatre is limited; due to this constraint, participants will be selected from among the applicants.

The main goal of this event is to gather the community of researchers working on questions that relate, in one way or another, statistical physics to high-dimensional statistical inference and learning. The format will be several (~10) three-hour introductory lectures and about three times as many invited talks.

The topics include:
  • Energy/loss landscapes in disordered systems, machine learning and inference problems
  • Computational and statistical thresholds and trade-offs
  • Theory of artificial multilayer neural networks
  • Rigorous approaches to spin glasses and related models of statistical inference
  • Parallels between optimisation algorithms and dynamics in physics
  • Vindicating the replica and cavity method rigorously
  • Current trends in variational Bayes inference 
  • Developments in message passing algorithms 
  • Applications of machine learning in physics
  • Information processing in biological systems 


Lecturers:
  • Gerard Ben Arous (Courant Institute)
  • Giulio Biroli (CEA Saclay, France)
  • Nicolas Brunel (Duke University)
  • Yann LeCun (Courant Institute and Facebook)
  • Michael Jordan (UC Berkeley)
  • Stephane Mallat (ENS and Collège de France)
  • Andrea Montanari (Stanford)
  • Dmitry Panchenko (University of Toronto, Canada)
  • Sundeep Rangan (New York University)
  • Riccardo Zecchina (Politecnico Turin, Italy)


Speakers:
  • Antonio C Auffinger (Northwestern University)
  • Afonso Bandeira (Courant Institute, NYU)
  • Jean Barbier (Queen Mary, London)
  • Quentin Berthet (Cambridge UK)
  • Jean-Philippe Bouchaud (CFM, Paris)
  • Joan Bruna (Courant Institute, NYU)
  • Patrick Charbonneau (Duke)
  • Amir Dembo (Stanford)
  • Allie Fletcher (UCLA)
  • Silvio Franz (Paris-Orsay)
  • Surya Ganguli (Stanford)
  • Alice Guionnet (ENS Lyon)
  • Aukosh Jagganath (Harvard)
  • Yoshiyuki Kabashima (Tokyo Tech)
  • Christina Lee (MIT)
  • Marc Lelarge (ENS, Paris)
  • Tengyu Ma (Princeton)
  • Marc Mezard (ENS, Paris)
  • Leo Miolane (ENS, Paris)
  • Remi Monasson (ENS, Paris)
  • Cristopher Moore (Santa Fe Institute)
  • Giorgio Parisi (Roma La Sapienza)
  • Will Perkins (Birmingham)
  • Federico Ricci-Tersenghi (Roma La Sapienza)
  • Cindy Rush (Columbia Univ.)
  • Levent Sagun (CEA Saclay)
  • S. S. Schoenholz (Google Brain)
  • Phil Schniter (Ohio State University)
  • David Jason Schwab (Northwestern University)
  • Guilhem Semerjian (ENS, Paris)
  • Alexandre Tkatchenko (University of Luxembourg)
  • Naftali Tishby (Hebrew University)
  • Pierfrancesco Urbani (CNRS, Paris)
  • Francesco Zamponi (ENS, Paris)




With best regards, the organizers

Florent Krzakala and Lenka Zdeborova







Friday, January 19, 2018

On Random Weights for Texture Generation in One Layer Neural Networks

Following up on the use of random projections (which, in the context of DNNs, is really about NNs with random weights), today we have:



Recent work in the literature has shown experimentally that one can use the lower layers of a trained convolutional neural network (CNN) to model natural textures. More interestingly, it has also been experimentally shown that only one layer with random filters can also model textures, although with less variability. In this paper we ask why one-layer CNNs with random filters are so effective in generating textures. We theoretically show that one-layer convolutional architectures (without a non-linearity), paired with the energy function used in previous literature, can in fact preserve and modulate frequency coefficients in a manner such that random weights and pretrained weights will generate the same type of images. Based on the results of this analysis, we question whether similar properties hold in the case where one uses one convolution layer with a non-linearity. We show that in the case of the ReLU non-linearity there are situations where only one input will give the minimum possible energy, whereas in the case of no non-linearity there are always infinitely many solutions that will give the minimum possible energy. Thus we can show that in certain situations adding a ReLU non-linearity generates less variable images.
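The frequency-domain argument in the abstract can be illustrated in 1-D with a toy computation (our construction, not the paper's): without a non-linearity, the texture energy depends on the input only through its power spectrum, reweighted by the (random but fixed) spectra of the filters, so any signal with the right spectrum, e.g. with randomized phases, achieves the same energy:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, flen = 256, 8, 9           # signal length, number of filters, filter length

x = rng.standard_normal(n)                 # stand-in for a (1-D) texture
W = rng.standard_normal((k, flen))         # one layer of random filters

def circ_conv(x, w, n):
    """Circular convolution, so the Fourier bookkeeping below is exact."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(w, n)))

# texture-synthesis energy: total squared response of the filter bank
E = sum(np.sum(circ_conv(x, w, n) ** 2) for w in W)

# Parseval: the same energy is a filter-weighted power spectrum of x, so the
# energy is blind to the phases of x; random vs. trained filters only change
# the fixed weighting
weighting = sum(np.abs(np.fft.fft(w, n)) ** 2 for w in W)
E_freq = np.sum(weighting * np.abs(np.fft.fft(x)) ** 2) / n

print(E, E_freq)    # equal up to floating-point error
```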







Thursday, January 18, 2018

Towards Understanding the Invertibility of Convolutional Neural Networks

Ok, here is a "connection to a particular model of model-based compressive sensing (and its recovery algorithms) and random-weight CNNs". This is great! I would have expected to see the LISTA paper by Gregor and LeCun in there somewhere. Regardless, this type of analysis brings us closer to figuring out the sort of layer that keeps or doesn't keep information (see Sunday Morning Insight: Sharp Phase Transitions in Machine Learning ? ). Enjoy !


Several recent works have empirically observed that Convolutional Neural Nets (CNNs) are (approximately) invertible. To understand this approximate invertibility phenomenon and how to leverage it more effectively, we focus on a theoretical explanation and develop a mathematical model of sparse signal recovery that is consistent with CNNs with random weights. We give an exact connection between a particular model of model-based compressive sensing (and its recovery algorithms) and random-weight CNNs. We show empirically that several learned networks are consistent with our mathematical analysis and then demonstrate that with such a simple theoretical framework, we can obtain reasonable reconstruction results on real images. We also discuss gaps between our model assumptions and the CNN trained for classification in practical scenarios.
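One simple intuition for why random-weight ReLU layers can remain invertible (a toy sketch of ours, not the paper's model, which works through model-based sparse recovery): if each random filter is paired with its negation, the ReLU activations retain the pre-activations exactly, since relu(t) - relu(-t) = t:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 64, 128
W = rng.standard_normal((m, n))
Wpm = np.vstack([W, -W])          # pair each random filter with its negation

x = rng.standard_normal(n)
a = np.maximum(Wpm @ x, 0.0)      # ReLU activations of the random layer

# relu(t) - relu(-t) = t, so the pre-activations W @ x are recovered exactly,
# and x follows by least squares since W is (generically) full column rank
z = a[:m] - a[m:]
x_hat = np.linalg.lstsq(W, z, rcond=None)[0]
print(np.linalg.norm(x_hat - x))  # ~ 0
```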





Wednesday, January 17, 2018

Deep Complex Networks - implementation -

Here is some interesting work on complex DNNs. 




At present, the vast majority of building blocks, techniques, and architectures for deep learning are based on real-valued operations and representations. However, recent work on recurrent neural networks and older fundamental theoretical analysis suggests that complex numbers could have a richer representational capacity and could also facilitate noise-robust memory retrieval mechanisms. Despite their attractive properties and potential for opening up entirely new neural architectures, complex-valued deep neural networks have been marginalized due to the absence of the building blocks required to design such models. In this work, we provide the key atomic components for complex-valued deep neural networks and apply them to convolutional feed-forward networks and convolutional LSTMs. More precisely, we rely on complex convolutions and present algorithms for complex batch-normalization, complex weight initialization strategies for complex-valued neural nets and we use them in experiments with end-to-end training schemes. We demonstrate that such complex-valued models are competitive with their real-valued counterparts. We test deep complex models on several computer vision tasks, on music transcription using the MusicNet dataset and on Speech Spectrum Prediction using the TIMIT dataset. We achieve state-of-the-art performance on these audio-related tasks.
Implementation is here: https://github.com/ChihebTrabelsi/deep_complex_networks

The reviews are here. Also, Carlos Perez had a blog post on the matter a while ago: Should Deep Learning use Complex Numbers?
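The core building block, complex convolution, is typically realized with four real convolutions via (a + ib) * (c + id) = (ac - bd) + i(ad + bc). A quick 1-D numpy check (our sketch, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(32) + 1j * rng.standard_normal(32)   # complex signal
w = rng.standard_normal(5) + 1j * rng.standard_normal(5)     # complex filter

# complex convolution from four real convolutions:
# (a + ib) * (c + id) = (ac - bd) + i(ad + bc)
real = np.convolve(x.real, w.real) - np.convolve(x.imag, w.imag)
imag = np.convolve(x.real, w.imag) + np.convolve(x.imag, w.real)
y = real + 1j * imag

# matches the direct complex convolution
assert np.allclose(y, np.convolve(x, w))
```

In a deep learning framework the same identity is applied channel-wise, which is what lets complex layers reuse ordinary real-valued convolution kernels.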





Tuesday, January 16, 2018

Understanding Deep Representations through Random Weights




When it comes to random projections and Deep Neural Networks, the following paper is intriguing:
In summary, applying random weights in the whole CNN-DeCNN architecture, we can still capture the geometric positions and contours of the image. The shape reduction of the feature maps is responsible for the randomness in the reconstructed images for higher-layer representations, due to the representation compression. And a random-weight DeCNN can reconstruct robust images if we have a large enough number of feature maps.



We systematically study the deep representation of random weight CNN (convolutional neural network) using the DeCNN (deconvolutional neural network) architecture. We first fix the weights of an untrained CNN, and for each layer of its feature representation, we train a corresponding DeCNN to reconstruct the input image. As compared with the pre-trained CNN, the DeCNN trained on a random weight CNN can reconstruct images more quickly and accurately, no matter which type of random distribution is used for the CNN's weights. It reveals that every layer of the random CNN can retain photographically accurate information about the image. We then let the DeCNN be untrained, i.e. the overall CNN-DeCNN architecture uses only random weights. Strikingly, we can reconstruct all position information of the image for low layer representations but the colors change. For high layer representations, we can still capture the rough contours of the image. We also change the number of feature maps and the shape of the feature maps and gain more insight into the random function of the CNN-DeCNN structure. Our work reveals that the purely random CNN-DeCNN architecture substantially contributes to the geometric and photometric invariance due to the intrinsic symmetry and invertible structure, but it discards the colorimetric information due to the random projection.
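The observation that more feature maps give better random-weight reconstructions has a simple linear analogue (our toy sketch, not the paper's CNN-DeCNN): back-projecting a random embedding preserves the geometry of the input better as the embedding dimension grows:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 256
x = np.sin(np.linspace(0, 8 * np.pi, n))       # input with clear structure

def backproject(x, m, rng):
    """Random 'encoder' W followed by the transpose as a random 'decoder'."""
    W = rng.standard_normal((m, n)) / np.sqrt(m)
    return W.T @ (W @ x)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

few, many = backproject(x, 64, rng), backproject(x, 1024, rng)
# more "feature maps" (rows of W) -> geometry better preserved
print(cosine(x, few), cosine(x, many))
```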





Monday, January 15, 2018

The stochastic interpolation method. The layered Structure of Tensor Estimation and Phase Transitions, Optimal Errors and Optimality of Message-Passing in GLMs



Jean just sent me the following:
Dear Igor,

We have a bunch of recent rigorous results that might be of interest to the community. In order to obtain them, I developed, together with Nicolas Macris, a new adaptive interpolation method well suited to treating high-dimensional Bayesian inference problems.

- In this article we present our method with application to random linear estimation/compressive sensing as well as to symmetric low-rank matrix factorization and tensor factorization.

https://arxiv.org/pdf/1705.02780.pdf

- In this one, presented at Allerton this year, we present a nice application of the method to the non-symmetric tensor factorization problem, which had resisted analysis until now. Moreover, we exploit the structure in « layers » of the model, which might be an idea of independent interest.

https://arxiv.org/pdf/1709.10368.pdf

- Finally, our main recent result is the application of the method to proving the statistical physics conjecture of the single-letter « replica formula » for the mutual information of generalized linear models. There we also rigorously derive the inference and generalization errors of a large class of single-layer neural networks, such as the perceptron.

https://arxiv.org/pdf/1708.03395.pdf

All the best

Thanks Jean, here are the papers you mentioned. I had mentioned one before and I like the layer approach of the second paper !




In recent years important progress has been achieved towards proving the validity of the replica predictions for the (asymptotic) mutual information (or "free energy") in Bayesian inference problems. The proof techniques that have emerged appear to be quite general, even though they have been worked out on a case-by-case basis. Unfortunately, a common point between all these schemes is their relatively high level of technicality. We present a new proof scheme that is quite straightforward with respect to the previous ones. We call it the stochastic interpolation method because it can be seen as an extension of the interpolation method developed by Guerra and Toninelli in the context of spin glasses, with a trial "parameter" which becomes a stochastic process. In order to illustrate our method we show how to prove the replica formula for three non-trivial inference problems. The first one is symmetric rank-one matrix estimation (or factorisation), which is the simplest problem considered here and the one for which the method is presented in full detail. Then we generalize to symmetric tensor estimation and random linear estimation. In addition, we show how to prove a tight lower bound for the mutual information of non-symmetric tensor estimation. We believe that the present method has a much wider range of applicability and also sheds new light on the reasons for the validity of replica formulas in Bayesian inference.
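To fix ideas, the kind of single-letter "replica formula" being proved can be stated, in the simplest symmetric rank-one case Y = sqrt(lambda/n) XX^T + Z with X_i i.i.d. from P_0 and E[X_0^2] = rho, roughly as follows (our paraphrase; see the paper for the exact hypotheses):

```latex
\lim_{n\to\infty}\frac{1}{n}\, I(X;Y)
  \;=\; \min_{q\in[0,\rho]}
  \left\{ \frac{\lambda}{4}\,(\rho-q)^2
  \;+\; I\!\big(X_0;\ \sqrt{\lambda q}\,X_0+Z_0\big) \big|_{Z_0\sim\mathcal{N}(0,1)} \right\}.
```

The hard n-dimensional problem thus collapses to a one-dimensional optimization over a scalar Gaussian channel; heuristically, by an I-MMSE argument the minimizer q* then also yields the asymptotic matrix minimum mean-square error.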

We consider rank-one non-symmetric tensor estimation and derive simple formulas for the mutual information. We start with the order-2 problem, namely matrix factorization. We treat it completely, in a simpler fashion than previous proofs, using a new type of interpolation method developed in [1]. We then show how to harness the structure in "layers" of tensor estimation in order to obtain a formula for the mutual information for the order-3 problem from the knowledge of the formula for the order-2 problem, still using the same kind of interpolation. Our proof technique straightforwardly generalizes and allows us to rigorously obtain the mutual information at any order in a recursive way.



We consider generalized linear models (GLMs) where an unknown n-dimensional signal vector is observed through the application of a random matrix and a non-linear (possibly probabilistic) componentwise output function. We consider the models in the high-dimensional limit, where the observation consists of m points, and m/n→α where α stays finite in the limit m,n→∞. This situation is ubiquitous in applications ranging from supervised machine learning to signal processing. A substantial amount of theoretical work analyzed the model case when the observation matrix has i.i.d. elements and the components of the ground-truth signal are taken independently from some known distribution. While statistical physics provided a number of explicit conjectures for special cases of this model, results existing for non-linear output functions were so far non-rigorous. At the same time GLMs with non-linear output functions are used as a basic building block of powerful multilayer feedforward neural networks. Therefore rigorously establishing the formulas conjectured for the mutual information is a key open problem that we solve in this paper. We also provide an explicit asymptotic formula for the optimal generalization error, and confirm the prediction of phase transitions in GLMs. Analyzing the resulting formulas for several non-linear output functions, including the rectified linear unit or modulus functions, we obtain quantitative descriptions of information-theoretic limitations of high-dimensional inference. Our proof technique relies on a new version of the interpolation method with an adaptive interpolation path and is of independent interest. Furthermore we show that a polynomial-time algorithm referred to as generalized approximate message-passing reaches the optimal generalization error for a large set of parameters.










Friday, January 12, 2018

OpenMined Hackathon in Paris (Saturday, January 13th)



I heard about the OpenMined project during the Paris Machine Learning meetup that we organized back in December, from a presentation by Morten Dahl. OpenMined is a community focused on building open-source technology for the decentralized ownership of data and intelligence.


"It is commonly believed that individuals must provide a copy of their personal information in order for AI to train or predict over it. This belief creates a tension between developers and consumers. Developers want the ability to create innovative products and services, while consumers want to avoid sending developers a copy of their data.

With OpenMined, AI can be trained on data that it never has access to.
The mission of the OpenMined community is to make privacy-preserving deep learning technology accessible to consumers, who supply data, and machine learning practitioners, who train models on that data. Given recent developments in cryptography, AI-based products and services do not need a copy of a dataset in order to create value from it."
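The cryptographic idea behind "training on data you never see" can be illustrated with additive secret sharing, one of the primitives used in this space (a toy sketch; this is not the OpenMined/PySyft API):

```python
import secrets

# Toy additive secret sharing over a prime field: each party holds a share
# that reveals nothing on its own; parties can add their shares locally,
# so a sum can be computed over data nobody sees in the clear.
P = 2**61 - 1   # field modulus (our choice for the sketch)

def share(x, n_parties=3):
    """Split x into n_parties random shares that sum to x mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

a, b = 1234, 5678
sa, sb = share(a), share(b)

# each party adds its own shares locally; nobody ever sees a or b
sum_shares = [(x + y) % P for x, y in zip(sa, sb)]
print(reconstruct(sum_shares))   # -> 6912
```

Real systems combine this with multiplication protocols and fixed-point encodings to run full model training, but the privacy principle is the one above.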


The community is organizing a hackathon:

HACKATHON
On Saturday, January 13th, the OpenMined community will be gathering in-person in over 20 cities around the world to collaborate on various coding projects and challenges. We’ll have a worldwide video hangout for all who cannot make it to a physical location. The hackathon will include three coding projects, each with a live tutorial from a member of the OpenMined community.
Here are the general details:

OpenMined Hackathon Details
Date: January 13, 2018

While hackathons will start at the discretion of each city's organizer (slack them for details), code tutorials will be broadcast live at three different times: 12:00 noon London time, 12:00 noon Eastern time, and 12:00 noon Pacific time.



Coding Projects
  • Beginner: Build a Neural Network in OpenMined
    Presentation: How to use the OpenMined Keras Interface
    Project: Find a new dataset and train a new neural network using the Keras interface!
  • Intermediate: Building the Guts of a Deep Learning Framework
    Presentation: How OpenMined Tensors Work - The Magic Under the Hood
    Project: Add a feature to Float Tensors
  • Advanced: Performance Improvements - GPUs and Networking
    Presentation: Optimizing the key bottlenecks of the system
    Project: The Need for Speed - Picking a Neural Network and Making it Faster



Physical Locations
Participants in this hackathon will meet in person at the following locations. If your city says "venue tbd", reach out to the Slack Point of Contact for specific details and directions. Starbucks is the suggested backup venue of choice - it usually has fast wifi and big tables available. (If you aren't on our Slack, click here for an invite.)
Before you come, you need to do the following:
  • Join our Slack and join the #hackathon channel
  • Reach out to your city's organizer on Slack!
  • Download and install Unity: https://unity3d.com/
  • Follow the Readme to set up OpenMined and PySyft, available here: https://github.com/OpenMined/PySyft & https://github.com/OpenMined/OpenMined



OpenMined is a community focused on building technology for the decentralized ownership of data and intelligence.
Join our Slack channel to get involved: https://openmined.org/
Follow: https://twitter.com/openminedorg
Contribute: https://github.com/OpenMined/OpenMined







In the OpenMined Slack there is a #paris channel. The Paris hackathon will be hosted at La Paillasse, thanks to support from LightOn; you can find all the details in that channel.





Random Incident Sound Waves for Fast Compressed Pulse-Echo Ultrasound Imaging


Martin just sent me the following:

Dear Igor, 
I recently discovered your post about the paper "Compressive 3D ultrasound imaging using a single sensor" (http://advances.sciencemag.org/content/3/12/e1701423) by Kruizinga et al. (https://nuit-blanche.blogspot.de/2017/12/compressive-3d-ultrasound-imaging-using.html) and read it with great interest. Thank you very much for highlighting this important contribution! I have been working on the incorporation of CS into ultrasound imaging for several years now (https://scholar.google.de/citations?user=X35rUbAAAAAJ&hl=de) and independently discovered a very similar method for high-frame rate ultrasound imaging (UI). Instead of minimizing the number of sensors, this method aims at minimizing the number of sequential pulse-echo measurements per image. It emits three types of random ultrasonic waves to reduce the coherence of the sound waves scattered by distinct basis functions, e.g. point-like scatterers or Fourier basis functions. The synthesis of these waves exploits the degrees of freedom provided by modern UI systems combined with planar transducer arrays. Specifically, it leverages random time delays, random apodization weights, and combinations thereof. (In essence, the method electronically realizes the fixed coding mask used by Kruizinga et al. as one type of random incident ultrasonic wave.) arXiv.org provides a preprint of this work: https://arxiv.org/abs/1801.00205
I hope that my method appeals to you and the readers of your blog. It would also mean a lot to me, if you mentioned this work on occasion.
Happy new year and keep up your good work


Thank you Martin ! Here is the paper:



A novel method for the fast acquisition and the recovery of ultrasound images disrupts the tradeoff between the image acquisition rate and the image quality. It recovers the spatial compressibility fluctuations in weakly-scattering soft tissue structures from only a few sequential pulse-echo measurements of the scattered sound field. The underlying linear inverse scattering problem uses a realistic d-dimensional physical model for the pulse-echo measurement process, accounting for diffraction, the combined effects of power-law absorption and dispersion, and the specifications of a planar transducer array. Postulating the existence of a nearly-sparse representation of the spatial compressibility fluctuations in a suitable orthonormal basis, the compressed sensing framework ensures its stable recovery by a sparsity-promoting ℓq-minimization method, if the pulse-echo measurements of the individual basis functions are sufficiently incoherent. The novel method meets this condition by leveraging the degrees of freedom in the syntheses of the incident ultrasonic waves. It emits three types of random ultrasonic waves that outperform the widely-used steered quasi-plane waves (QPWs). Their synthesis applies random time delays, apodization weights, or combinations thereof to the voltage signals exciting the individual elements of the planar transducer array.
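To make the three types of random incident waves concrete, here is a toy synthesis of per-element excitation signals (all parameter values are our own placeholders, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(5)
fs, fc = 40e6, 4e6                   # sampling and center frequency (assumed)
n_el, pitch, c = 64, 0.3e-3, 1540.0  # elements, pitch [m], sound speed [m/s]

t = np.arange(0, 1e-6, 1 / fs)
pulse = np.sin(2 * np.pi * fc * t) * np.hanning(t.size)   # excitation pulse

def excitation(delays, weights, n_samples=200):
    """Per-element voltage signals: weighted, delayed copies of the pulse."""
    out = np.zeros((n_el, n_samples))
    for i, (d, w) in enumerate(zip(delays, weights)):
        k = int(round(d * fs))
        out[i, k:k + pulse.size] = w * pulse[: max(0, n_samples - k)]
    return out

# steered quasi-plane wave (the baseline the paper compares against)
theta = np.deg2rad(10)
qpw = excitation(np.arange(n_el) * pitch * np.sin(theta) / c, np.ones(n_el))

# the three random types: random delays, random apodization, and both
rand_delay = excitation(rng.uniform(0, 0.5e-6, n_el), np.ones(n_el))
rand_apod = excitation(np.zeros(n_el), rng.uniform(-1, 1, n_el))
rand_both = excitation(rng.uniform(0, 0.5e-6, n_el), rng.uniform(-1, 1, n_el))
print(qpw.shape, rand_both.shape)
```

Each row is the voltage signal driving one transducer element; the quasi-plane wave uses linearly increasing delays, while the random variants draw delays and/or apodization weights at random to decorrelate the echoes of distinct basis functions.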


