Selective Training: A Strategy for Fast Backpropagation on Sentence Embeddings

Jan Neerbek, Peter Dolog, Ira Assent

Research output: Contribution to book/anthology/report/conference proceeding › Article in proceeding › Research › peer-review

Abstract

Representation- or embedding-based machine learning models, such as language models or convolutional neural networks, have shown great potential for improved performance. However, for complex models on large datasets, training time can be extensive, approaching weeks, which is often infeasible in practice. In this work, we present a method to reduce training time substantially by selecting training instances that provide relevant information for training. Selection is based on the similarity of the learned representations over input instances, thus allowing for learning a non-trivial weighting scheme from multi-dimensional representations. We demonstrate the efficiency and effectiveness of our approach in several text classification tasks using recursive neural networks. Our experiments show that by removing approximately one fifth of the training data, the objective function converges up to six times faster without sacrificing accuracy.
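The abstract describes selecting training instances by the similarity of their learned representations, but gives no implementation details here. Below is a minimal illustrative sketch in Python, assuming a simple epoch-to-epoch cosine-similarity criterion: instances whose sentence embeddings have barely moved since the previous epoch are dropped (roughly one fifth, matching the figure in the abstract). All names (cosine_sim, select_instances, drop_fraction) are hypothetical; the paper learns a non-trivial weighting scheme, not this fixed rule.

# Hypothetical sketch of representation-based instance selection.
# Assumption (not from the paper): instances whose embeddings moved
# least between epochs carry the least new gradient signal and are
# dropped for the next epoch.
import numpy as np

def cosine_sim(a, b):
    # Row-wise cosine similarity between two embedding matrices.
    num = np.sum(a * b, axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-12
    return num / den

def select_instances(prev_emb, curr_emb, drop_fraction=0.2):
    # Return indices of training instances to keep. Instances whose
    # current embedding is most similar to their previous-epoch
    # embedding are assumed uninformative and are removed.
    sim = cosine_sim(prev_emb, curr_emb)
    n_drop = int(drop_fraction * len(sim))
    # Keep the instances whose representations changed the most
    # (lowest similarity to their previous-epoch embedding).
    keep = np.argsort(sim)[: len(sim) - n_drop]
    return np.sort(keep)

# Toy usage: 1000 instances with 50-dimensional sentence embeddings.
rng = np.random.default_rng(0)
prev = rng.normal(size=(1000, 50))
curr = prev + 0.1 * rng.normal(size=(1000, 50))  # small per-epoch drift
kept = select_instances(prev, curr, drop_fraction=0.2)
print(f"training on {len(kept)} of 1000 instances")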

Original language: English
Title of host publication: Advances in Knowledge Discovery and Data Mining - 23rd Pacific-Asia Conference, PAKDD 2019, Macau, China, April 14-17, 2019, Proceedings, Part III
Editors: Qiang Yang, Min-Ling Zhang, Zhiguo Gong, Sheng-Jun Huang, Zhi-Hua Zhou
Number of pages: 14
Publisher: Springer VS
Publication date: 2019
Pages: 40-53
ISBN (Print): 978-3-030-16141-5
DOIs: https://doi.org/10.1007/978-3-030-16142-2_4
Publication status: Published - 2019
Event: Pacific-Asia Conference on Knowledge Discovery and Data Mining - Macau, China
Duration: 14 Apr 2019 - 17 Apr 2019
Conference number: 23rd

Conference

Conference: Pacific-Asia Conference on Knowledge Discovery and Data Mining
Number: 23rd
Country: China
City: Macau
Period: 14/04/2019 - 17/04/2019
Series: Lecture Notes in Computer Science
Volume: 11441
ISSN: 0302-9743

Keywords

  • Machine learning
  • Neural network
  • Recursive models
  • Selective training

Cite this

Neerbek, J., Dolog, P., & Assent, I. (2019). Selective Training: A Strategy for Fast Backpropagation on Sentence Embeddings. In Q. Yang, M-L. Zhang, Z. Gong, S-J. Huang, & Z-H. Zhou (Eds.), Advances in Knowledge Discovery and Data Mining - 23rd Pacific-Asia Conference, PAKDD 2019, Macau, China, April 14-17, 2019, Proceedings, Part III (pp. 40-53). Springer VS. Lecture Notes in Computer Science, Vol. 11441. https://doi.org/10.1007/978-3-030-16142-2_4
Neerbek, Jan ; Dolog, Peter ; Assent, Ira. / Selective Training: A Strategy for Fast Backpropagation on Sentence Embeddings. Advances in Knowledge Discovery and Data Mining - 23rd Pacific-Asia Conference, PAKDD 2019, Macau, China, April 14-17, 2019, Proceedings, Part III. editor / Qiang Yang ; Min-Ling Zhang ; Zhiguo Gong ; Sheng-Jun Huang ; Zhi-Hua Zhou. Springer VS, 2019. pp. 40-53 (Lecture Notes in Computer Science, Vol. 11441).
@inproceedings{2b6f7a3c2a8e47489882493ec2057668,
title = "Selective Training: A Strategy for Fast Backpropagation on Sentence Embeddings",
abstract = "Representation or embedding based machine learning models, such as language models or convolutional neural networks have shown great potential for improved performance. However, for complex models on large datasets training time can be extensive, approaching weeks, which is often infeasible in practice. In this work, we present a method to reduce training time substantially by selecting training instances that provide relevant information for training. Selection is based on the similarity of the learned representations over input instances, thus allowing for learning a non-trivial weighting scheme from multi-dimensional representations. We demonstrate the efficiency and effectivity of our approach in several text classification tasks using recursive neural networks. Our experiments show that by removing approximately one fifth of the training data the objective function converges up{\^A} to six times faster without sacrificing accuracy.",
keywords = "Machine learning, Neural network, Recursive models, Selective training",
author = "Jan Neerbek and Peter Dolog and Ira Assent",
year = "2019",
doi = "10.1007/978-3-030-16142-2_4",
language = "English",
isbn = "978-3-030-16141-5",
series = "Lecture Notes in Computer Science",
publisher = "Springer VS",
pages = "40--53",
editor = "Qiang Yang and Min-Ling Zhang and Zhiguo Gong and Sheng-Jun Huang and Zhi-Hua Zhou",
booktitle = "Advances in Knowledge Discovery and Data Mining - 23rd Pacific-Asia Conference, PAKDD 2019, Macau, China, April 14-17, 2019, Proceedings, Part III",

}

Neerbek, J, Dolog, P & Assent, I 2019, Selective Training: A Strategy for Fast Backpropagation on Sentence Embeddings. in Q Yang, M-L Zhang, Z Gong, S-J Huang & Z-H Zhou (eds), Advances in Knowledge Discovery and Data Mining - 23rd Pacific-Asia Conference, PAKDD 2019, Macau, China, April 14-17, 2019, Proceedings, Part III. Springer VS, Lecture Notes in Computer Science, vol. 11441, pp. 40-53, Pacific-Asia Conference on Knowledge Discovery and Data Mining, Macau, China, 14/04/2019. https://doi.org/10.1007/978-3-030-16142-2_4

Selective Training: A Strategy for Fast Backpropagation on Sentence Embeddings. / Neerbek, Jan; Dolog, Peter; Assent, Ira.

Advances in Knowledge Discovery and Data Mining - 23rd Pacific-Asia Conference, PAKDD 2019, Macau, China, April 14-17, 2019, Proceedings, Part III. ed. / Qiang Yang; Min-Ling Zhang; Zhiguo Gong; Sheng-Jun Huang; Zhi-Hua Zhou. Springer VS, 2019. p. 40-53 (Lecture Notes in Computer Science, Vol. 11441).

Research output: Contribution to book/anthology/report/conference proceeding › Article in proceeding › Research › peer-review

TY - GEN

T1 - Selective Training: A Strategy for Fast Backpropagation on Sentence Embeddings

AU - Neerbek, Jan

AU - Dolog, Peter

AU - Assent, Ira

PY - 2019

Y1 - 2019

N2 - Representation- or embedding-based machine learning models, such as language models or convolutional neural networks, have shown great potential for improved performance. However, for complex models on large datasets, training time can be extensive, approaching weeks, which is often infeasible in practice. In this work, we present a method to reduce training time substantially by selecting training instances that provide relevant information for training. Selection is based on the similarity of the learned representations over input instances, thus allowing for learning a non-trivial weighting scheme from multi-dimensional representations. We demonstrate the efficiency and effectiveness of our approach in several text classification tasks using recursive neural networks. Our experiments show that by removing approximately one fifth of the training data, the objective function converges up to six times faster without sacrificing accuracy.

AB - Representation- or embedding-based machine learning models, such as language models or convolutional neural networks, have shown great potential for improved performance. However, for complex models on large datasets, training time can be extensive, approaching weeks, which is often infeasible in practice. In this work, we present a method to reduce training time substantially by selecting training instances that provide relevant information for training. Selection is based on the similarity of the learned representations over input instances, thus allowing for learning a non-trivial weighting scheme from multi-dimensional representations. We demonstrate the efficiency and effectiveness of our approach in several text classification tasks using recursive neural networks. Our experiments show that by removing approximately one fifth of the training data, the objective function converges up to six times faster without sacrificing accuracy.

KW - Machine learning

KW - Neural network

KW - Recursive models

KW - Selective training

UR - http://www.scopus.com/inward/record.url?scp=85065028468&partnerID=8YFLogxK

U2 - 10.1007/978-3-030-16142-2_4

DO - 10.1007/978-3-030-16142-2_4

M3 - Article in proceeding

SN - 978-3-030-16141-5

T3 - Lecture Notes in Computer Science

SP - 40

EP - 53

BT - Advances in Knowledge Discovery and Data Mining - 23rd Pacific-Asia Conference, PAKDD 2019, Macau, China, April 14-17, 2019, Proceedings, Part III

A2 - Yang, Qiang

A2 - Zhang, Min-Ling

A2 - Gong, Zhiguo

A2 - Huang, Sheng-Jun

A2 - Zhou, Zhi-Hua

PB - Springer VS

ER -

Neerbek J, Dolog P, Assent I. Selective Training: A Strategy for Fast Backpropagation on Sentence Embeddings. In Yang Q, Zhang M-L, Gong Z, Huang S-J, Zhou Z-H, editors, Advances in Knowledge Discovery and Data Mining - 23rd Pacific-Asia Conference, PAKDD 2019, Macau, China, April 14-17, 2019, Proceedings, Part III. Springer VS. 2019. p. 40-53. (Lecture Notes in Computer Science, Vol. 11441). https://doi.org/10.1007/978-3-030-16142-2_4