
An energy-based model (EBM) (also called Canonical Ensemble Learning (CEL) or Learning via Canonical Ensemble (LCE)) is an application of the canonical ensemble formulation of statistical physics to learning from data. The approach prominently appears in generative models (GMs).

EBMs provide a unified framework for many probabilistic and non-probabilistic approaches to such learning, particularly for training graphical and other structured models. [1]

An EBM learns the characteristics of a target dataset and generates a similar but larger dataset. EBMs detect the latent variables of a dataset and generate new datasets with a similar distribution. [1]

Energy-based generative neural networks [2] [3] are a class of generative models that aim to learn explicit probability distributions of data in the form of energy-based models, whose energy functions are parameterized by modern deep neural networks.

Boltzmann machines are a special form of energy-based models with a specific parametrization of the energy. [4]

Description

For a given input $x$, the model describes an energy $E_\theta(x)$ such that the Boltzmann distribution

$P_\theta(x) = \frac{\exp(-\beta E_\theta(x))}{Z(\theta)}$

is a probability (density), and typically $\beta = 1$.

Since the normalization constant

$Z(\theta) := \int_{x \in X} \exp(-\beta E_\theta(x)) \, dx,$

also known as the partition function, depends on the Boltzmann factors of all possible inputs $x$, it cannot be easily computed or reliably estimated during training simply using standard maximum likelihood estimation.
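As a minimal illustrative sketch (not code from the cited sources), the energy function $E_\theta$ can be parameterized by a small neural network; the snippet below (PyTorch, with the hypothetical class name EnergyNet) evaluates only the unnormalized Boltzmann factor, since the partition function $Z(\theta)$ is intractable in general.

    import torch
    import torch.nn as nn

    class EnergyNet(nn.Module):
        """Hypothetical energy function E_theta(x) parameterized by a small MLP."""
        def __init__(self, dim: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim, 128), nn.SiLU(),
                nn.Linear(128, 128), nn.SiLU(),
                nn.Linear(128, 1),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # One scalar energy per input; lower energy means higher probability.
            return self.net(x).squeeze(-1)

    def boltzmann_factor(energy_fn, x: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
        # exp(-beta * E_theta(x)); dividing by Z(theta) would require an intractable
        # integral over all x, which is why training relies on MCMC instead.
        return torch.exp(-beta * energy_fn(x))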

However, for maximizing the likelihood during training, the gradient of the log-likelihood of a single training example $x$ is given (using the chain rule) by

$\partial_\theta \log P_\theta(x) = \mathbb{E}_{x' \sim P_\theta}\left[\partial_\theta E_\theta(x')\right] - \partial_\theta E_\theta(x).$

The expectation in the above formula for the gradient can be approximately estimated by drawing samples $x'$ from the distribution $P_\theta$ using Markov chain Monte Carlo (MCMC). [5]

Early energy-based models, such as the 2003 Boltzmann machine by Hinton, estimated this expectation using a block Gibbs sampler. Newer approaches make use of the more efficient Stochastic Gradient Langevin Dynamics (LD), drawing samples using: [6]

$x'_0 \sim P_0, \qquad x'_{i+1} = x'_i - \frac{\alpha}{2} \frac{\partial E_\theta(x'_i)}{\partial x'_i} + \epsilon,$

with $\epsilon \sim \mathcal{N}(0, \alpha)$. A replay buffer of past values $x'_i$ is used with LD to initialize the optimization module. [1]
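The Langevin update above can be sketched as follows, assuming the hypothetical EnergyNet from the earlier snippet; the step size alpha and number of steps are illustrative choices rather than values from the cited papers.

    import torch

    def langevin_sample(energy_fn, x_init: torch.Tensor,
                        n_steps: int = 60, alpha: float = 0.01) -> torch.Tensor:
        """Draw approximate samples from P_theta via Langevin dynamics."""
        x = x_init.clone().detach()
        for _ in range(n_steps):
            x.requires_grad_(True)
            grad = torch.autograd.grad(energy_fn(x).sum(), x)[0]   # dE_theta/dx
            noise = torch.randn_like(x) * alpha ** 0.5             # eps ~ N(0, alpha)
            x = (x - 0.5 * alpha * grad + noise).detach()          # x_{i+1} = x_i - (alpha/2) dE/dx + eps
        return x

In practice, the initial points x_init would be drawn from the replay buffer mentioned above (or from noise), so that chains need not be run from scratch at every iteration.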

The parameters $\theta$ of the neural network are, therefore, trained in a generative manner by MCMC-based maximum likelihood estimation: [7] the learning process follows an "analysis by synthesis" scheme, where within each learning iteration the algorithm samples synthesized examples from the current model by a gradient-based MCMC method (e.g., Langevin dynamics or Hybrid Monte Carlo) and then updates the model parameters $\theta$ based on the difference between the training examples and the synthesized ones, as in the gradient equation above. This process can be interpreted as an alternating mode-seeking and mode-shifting process, and also has an adversarial interpretation. [8] [9]
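An illustrative training step under the assumptions of the previous snippets (a sketch, not the exact procedure of any cited paper): the surrogate loss below has the same parameter gradient as the approximate maximum-likelihood gradient, with negative samples synthesized by the langevin_sample function sketched above.

    import torch

    def train_step(energy_fn, optimizer, x_data: torch.Tensor,
                   x_buffer: torch.Tensor) -> float:
        # "Synthesis": draw negative samples from the current model,
        # initializing the chains from a replay buffer of past samples.
        x_synth = langevin_sample(energy_fn, x_buffer)

        # "Analysis": the gradient of this loss equals the energy gradient on the
        # training data minus its (approximate) expectation under the model.
        loss = energy_fn(x_data).mean() - energy_fn(x_synth).mean()

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()   # x_synth would typically be pushed back into the buffer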

In the end, the model learns a function that assigns low energies to correct values and higher energies to incorrect values. [1]

After training, given a converged energy model $E_\theta(x)$, the Metropolis–Hastings algorithm can be used to draw new samples. The acceptance probability is given by

$P_{\mathrm{acc}}(x_i \to x^*) = \min\left(1, \frac{P_\theta(x^*)}{P_\theta(x_i)}\right).$
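A sketch of one such step with a symmetric Gaussian proposal (an illustrative assumption; any symmetric proposal behaves the same way). Note that the intractable partition function $Z(\theta)$ cancels in the density ratio, which is what makes the acceptance test computable.

    import torch

    def metropolis_hastings_step(energy_fn, x: torch.Tensor,
                                 proposal_std: float = 0.1) -> torch.Tensor:
        """One MH step: propose x*, accept with probability min(1, P(x*)/P(x))."""
        x_prop = x + proposal_std * torch.randn_like(x)        # symmetric proposal
        # P_theta(x*) / P_theta(x) = exp(E_theta(x) - E_theta(x*)); Z(theta) cancels.
        log_ratio = energy_fn(x) - energy_fn(x_prop)
        accept = torch.rand_like(log_ratio).log() < log_ratio
        return torch.where(accept.unsqueeze(-1), x_prop, x)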

History

The term "energy-based models" was first coined in a 2003 JMLR paper [10] where the authors defined a generalisation of independent components analysis to the overcomplete setting using EBMs. Other early work on EBMs proposed models that represented energy as a composition of latent and observable variables.

Characteristics

EBMs demonstrate useful properties: [1]

  • Simplicity and stability–The EBM is the only object that needs to be designed and trained. Separate networks need not be trained to ensure balance.
  • Adaptive computation time–An EBM can generate sharp, diverse samples or (more quickly) coarse, less diverse samples. Given infinite time, this procedure produces true samples. [8]
  • Flexibility–In Variational Autoencoders (VAE) and flow-based models, the generator learns a map from a continuous space to a (possibly) discontinuous space containing different data modes. EBMs can learn to assign low energies to disjoint regions (multiple modes).
  • Adaptive generation–EBM generators are implicitly defined by the probability distribution, and automatically adapt as the distribution changes (without training), allowing EBMs to address domains where generator training is impractical, as well as minimizing mode collapse and avoiding spurious modes from out-of-distribution samples. [5]
  • Compositionality–Individual models are unnormalized probability distributions, allowing models to be combined through product of experts or other hierarchical techniques; since unnormalized densities multiply when energies add, combining models amounts to summing their energies, as sketched below.
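A minimal sketch of such a composition, reusing the hypothetical energy networks from the earlier snippets:

    def product_of_experts_energy(energy_fns, x):
        # prod_k exp(-E_k(x)) = exp(-sum_k E_k(x)), so a product of experts
        # is obtained simply by summing the component energies.
        return sum(energy_fn(x) for energy_fn in energy_fns)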

Experimental results

On image datasets such as CIFAR-10 and ImageNet 32x32, an EBM generated high-quality images relatively quickly. It supported combining features learned from one type of image to generate other types of images. It was able to generalize to out-of-distribution datasets, outperforming flow-based and autoregressive models. The EBM was also relatively resistant to adversarial perturbations, performing better on classification under such perturbations than models explicitly trained against them. [1]

Applications

Target applications include natural language processing, robotics and computer vision. [1]

The first energy-based generative neural network is the generative ConvNet, [1] proposed in 2016 for image patterns, where the neural network is a convolutional neural network. [11] [12] The model has been generalized to various domains to learn distributions of videos [8] [3] and 3D voxels. [13] Variants have made these models more effective. [14] [15] [16] [17] [18] [19] They have proven useful for data generation (e.g., image synthesis, [1] video synthesis, [8] 3D shape synthesis, [5] etc.), data recovery (e.g., recovering videos with missing pixels or image frames, [8] 3D super-resolution, [5] etc.), and data reconstruction (e.g., image reconstruction and linear interpolation [15]).

Alternatives

EBMs compete with techniques such as variational autoencoders (VAEs), generative adversarial networks (GANs) or normalizing flows. [1]

Extensions

Joint energy-based models

A classifier can be reinterpreted as a joint energy-based model.

Joint energy-based models (JEM), proposed in 2020 by Grathwohl et al., allow any classifier with softmax output to be interpreted as an energy-based model. The key observation is that such a classifier is trained to predict the conditional probability

$p_\theta(y \mid x) = \frac{e^{f_\theta(x)[y]}}{\sum_{y'} e^{f_\theta(x)[y']}},$

where $f_\theta(x)[y]$ is the $y$-th index of the logits $f_\theta(x)$ corresponding to class $y$. Without any change to the logits, it was proposed to reinterpret them as describing a joint probability density:

$p_\theta(x, y) = \frac{e^{f_\theta(x)[y]}}{Z(\theta)},$

with unknown partition function $Z(\theta)$ and energy $E_\theta(x, y) = -f_\theta(x)[y]$. By marginalization, we obtain the unnormalized density

$p_\theta(x) = \sum_y p_\theta(x, y) = \frac{\sum_y e^{f_\theta(x)[y]}}{Z(\theta)},$

therefore

$E_\theta(x) = -\log \sum_y e^{f_\theta(x)[y]},$

so that any classifier with softmax output can be used to define an energy function $E_\theta(x)$.
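In code, this reinterpretation amounts to a couple of lines over the classifier's logits. The sketch below assumes `classifier` is any model returning a (batch, num_classes) logit tensor; the function names are illustrative.

    import torch

    def joint_energy(classifier, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # E_theta(x, y) = -f_theta(x)[y]
        logits = classifier(x)                               # shape (batch, num_classes)
        return -logits.gather(1, y.unsqueeze(1)).squeeze(1)

    def marginal_energy(classifier, x: torch.Tensor) -> torch.Tensor:
        # E_theta(x) = -log sum_y exp(f_theta(x)[y])
        return -torch.logsumexp(classifier(x), dim=1)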

See also

Literature

  • Du, Yilun; Mordatch, Igor. Implicit Generation and Generalization in Energy-Based Models. https://arxiv.org/abs/1903.08689
  • Grathwohl, Will; Wang, Kuan-Chieh; Jacobsen, Jörn-Henrik; Duvenaud, David; Norouzi, Mohammad; Swersky, Kevin. Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One. https://arxiv.org/abs/1912.03263

References

  1. ^ a b c d e f g h i j Rodriguez, Jesus (2019-04-01). "Generating Training Datasets Using Energy Based Models that Actually Scale". Medium. Archived from the original on 2019-04-01. Retrieved 2019-12-27.
  2. ^ Xie, Jianwen; Lu, Yang; Zhu, Song-Chun; Wu, Ying Nian (2016). "A theory of generative ConvNet". ICML. arXiv: 1602.03264. Bibcode: 2016arXiv160203264X.
  3. ^ a b Xie, Jianwen; Zhu, Song-Chun; Wu, Ying Nian (2019). "Learning Energy-based Spatial-Temporal Generative ConvNets for Dynamic Patterns". IEEE Transactions on Pattern Analysis and Machine Intelligence. 43 (2): 516–531. arXiv: 1909.11975. Bibcode: 2019arXiv190911975X. doi: 10.1109/tpami.2019.2934852. ISSN  0162-8828. PMID  31425020. S2CID  201098397.
  4. ^ Bengio, Yoshua (2009). Learning Deep Architectures for AI. p. 54. https://books.google.de/books?id=cq5ewg7FniMC&pg=PA54
  5. ^ a b c d Du, Yilun; Mordatch, Igor (2019-03-20). "Implicit Generation and Generalization in Energy-Based Models". arXiv: 1903.08689 [ cs.LG].
  6. ^ Grathwohl, Will; Wang, Kuan-Chieh; Jacobsen, Jörn-Henrik; Duvenaud, David; Norouzi, Mohammad; Swersky, Kevin (2019). "Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One". arXiv: 1912.03263 [cs.LG].
  7. ^ Barbu, Adrian; Zhu, Song-Chun (2020). Monte Carlo Methods. Springer.
  8. ^ a b c d e Xie, Jianwen; Zhu, Song-Chun; Wu, Ying Nian (July 2017). "Synthesizing Dynamic Patterns by Spatial-Temporal Generative ConvNet". 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 1061–1069. arXiv: 1606.00972. doi: 10.1109/cvpr.2017.119. ISBN  978-1-5386-0457-1. S2CID  763074.
  9. ^ Wu, Ying Nian; Xie, Jianwen; Lu, Yang; Zhu, Song-Chun (2018). "Sparse and deep generalizations of the FRAME model". Annals of Mathematical Sciences and Applications. 3 (1): 211–254. doi: 10.4310/amsa.2018.v3.n1.a7. ISSN  2380-288X.
  10. ^ Teh, Yee Whye; Welling, Max; Osindero, Simon; Hinton, Geoffrey E. (December 2003). "Energy-Based Models for Sparse Overcomplete Representations". JMLR.
  11. ^ Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. (1998). "Gradient-based learning applied to document recognition". Proceedings of the IEEE. 86 (11): 2278–2324. doi: 10.1109/5.726791. ISSN  0018-9219. S2CID  14542261.
  12. ^ Krizhevsky, Alex; Sutskever, Ilya; Hinton, Geoffrey (2012). "ImageNet classification with deep convolutional neural networks" (PDF). NIPS.
  13. ^ Xie, Jianwen; Zheng, Zilong; Gao, Ruiqi; Wang, Wenguan; Zhu, Song-Chun; Wu, Ying Nian (June 2018). "Learning Descriptor Networks for 3D Shape Synthesis and Analysis". 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE. pp. 8629–8638. arXiv: 1804.00586. Bibcode: 2018arXiv180400586X. doi: 10.1109/cvpr.2018.00900. ISBN  978-1-5386-6420-9. S2CID  4564025.
  14. ^ Gao, Ruiqi; Lu, Yang; Zhou, Junpei; Zhu, Song-Chun; Wu, Ying Nian (June 2018). "Learning Generative ConvNets via Multi-grid Modeling and Sampling". 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE. pp. 9155–9164. arXiv: 1709.08868. doi: 10.1109/cvpr.2018.00954. ISBN  978-1-5386-6420-9. S2CID  4566195.
  15. ^ a b Nijkamp, Erik; Hill, Mitch; Zhu, Song-Chun; Wu, Ying Nian (2019). "On Learning Non-Convergent Non-Persistent Short-Run MCMC Toward Energy-Based Model". NeurIPS. OCLC 1106340764.
  16. ^ Cai, Xu; Wu, Yang; Li, Guanbin; Chen, Ziliang; Lin, Liang (2019-07-17). "FRAME Revisited: An Interpretation View Based on Particle Evolution". Proceedings of the AAAI Conference on Artificial Intelligence. 33: 3256–3263. arXiv: 1812.01186. doi: 10.1609/aaai.v33i01.33013256. ISSN  2374-3468.
  17. ^ Xie, Jianwen; Lu, Yang; Gao, Ruiqi; Zhu, Song-Chun; Wu, Ying Nian (2020-01-01). "Cooperative Training of Descriptor and Generator Networks". IEEE Transactions on Pattern Analysis and Machine Intelligence. 42 (1): 27–45. arXiv: 1609.09408. doi: 10.1109/tpami.2018.2879081. ISSN  0162-8828. PMID  30387724. S2CID  7759006.
  18. ^ Xie, Jianwen; Lu, Yang; Gao, Ruiqi; Zhu, Song-Chun (2018). "Cooperative Learning of Energy-Based Model and Latent Variable Model via MCMC Teaching". Thirty-Second AAAI Conference on Artificial Intelligence. 32. doi: 10.1609/aaai.v32i1.11834. S2CID 9212174.
  19. ^ Han, Tian; Nijkamp, Erik; Fang, Xiaolin; Hill, Mitch; Zhu, Song-Chun; Wu, Ying Nian (June 2019). "Divergence Triangle for Joint Training of Generator Model, Energy-Based Model, and Inferential Model". 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 8662–8671. doi: 10.1109/cvpr.2019.00887. ISBN  978-1-7281-3293-8. S2CID  57189202.
