Works Cited
Alin 2010 Alin, A. (2010) “Multicollinearity.” Wiley Interdisciplinary Reviews: Computational Statistics, 2(3), pp. 370–374.
Allen 1997 Allen, M. P. (1997) “The problem of multicollinearity.” In Allen, M. P., Understanding Regression Analysis. Berlin: Springer, pp. 176–180.
Argamon et al. 2009 Argamon, S., Goulain, J.-B., Horton, R., and Olsen, M. (2009) “Vive la différence! Text mining gender difference in French literature.” Digital Humanities Quarterly, 3(2).
Assael et al. 2022 Assael, Y., Sommerschield, T., Shillingford, B., Bordbar, M., Pavlopoulos, J., Chatzipanagiotou, M., Androutsopoulos, I., Prag, J., and de Freitas, N. (2022) “Restoring and attributing ancient texts using deep neural networks.” Nature, 603(7900), pp. 280–283.
Baledent, Hiebel, and Lejeune 2020 Baledent, A., Hiebel, N., and Lejeune, G. (2020) “Dating ancient texts: An approach for noisy French documents.” Language Resources and Evaluation Conference (LREC) 2020.
Belgiu and Drăguţ 2016 Belgiu, M., and Drăguţ, L. (2016) “Random forest in remote sensing: A review of applications and future directions.” ISPRS Journal of Photogrammetry and Remote Sensing, 114, pp. 24–31.
Borisov et al. 2022 Borisov, V., Leemann, T., Seßler, K., Haug, J., Pawelczyk, M., and Kasneci, G. (2022) “Deep Neural Networks and Tabular Data: A Survey.” IEEE Transactions on Neural Networks and Learning Systems, pp. 1–21. https://doi.org/10.1109/TNNLS.2022.3229161
Breiman 2001 Breiman, L. (2001) “Random forests.” Machine Learning, 45(1), pp. 5–32.
Brigato and Iocchi 2021 Brigato, L., and Iocchi, L. (2021) “A Close Look at Deep Learning with Small Data.” 2020 25th International Conference on Pattern Recognition (ICPR), pp. 2490–2497. https://doi.org/10.1109/ICPR48806.2021.9412492
Chan et al. 2022 Chan, J. Y. L., Leow, S. M. H., Bea, K. T., Cheng, W. K., Phoong, S. W., Hong, Z. W., and Chen, Y. L. (2022) “Mitigating the multicollinearity problem and its machine learning approach: A review.” Mathematics, 10(8), 1283.
Chen and Guestrin 2016 Chen, T., and Guestrin, C. (2016) “XGBoost: A Scalable Tree Boosting System.” Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785–794. https://doi.org/10.1145/2939672.2939785
Claesen and De Moor 2015 Claesen, M., and De Moor, B. (2015) “Hyperparameter Search in Machine Learning” (arXiv:1502.02127). arXiv. https://doi.org/10.48550/arXiv.1502.02127
Claesen et al. 2014 Claesen, M., Simm, J., Popovic, D., Moreau, Y., and De Moor, B. (2014) “Easy Hyperparameter Search Using Optunity” (arXiv:1412.1114). arXiv. https://doi.org/10.48550/arXiv.1412.1114
Drucker et al. 1996 Drucker, H., Burges, C. J., Kaufman, L., Smola, A., and Vapnik, V. (1996) “Support vector regression machines.” Advances in Neural Information Processing Systems, 9.
Elliott, Bodard, Cayless, et al. 2006 Elliott, T., Bodard, G., Cayless, H., et al. (2006, 2022) EpiDoc: Epigraphic Documents in TEI XML. https://epidoc.stoa.org/
Emmanuel et al. 2021 Emmanuel, T., Maupong, T., Mpoeleng, D., Semong, T., Mphago, B., and Tabona, O. (2021) “A survey on missing data in machine learning.” Journal of Big Data, 8(1), pp. 1–37.
Finegold et al. 2016 Finegold, M., Otis, J., Shalizi, C., Shore, D., Wang, L., and Warren, C. (2016) “Six degrees of Francis Bacon: A statistical method for reconstructing large historical social networks.” Digital Humanities Quarterly, 10(3).
Fragkiadakis, Nyst, and Putten 2021 Fragkiadakis, M., Nyst, V., and Putten, P. van der. (2021) “Towards a User-Friendly Tool for Automated Sign Annotation: Identification and Annotation of Time Slots, Number of Hands, and Handshape.” Digital Humanities Quarterly, 15(1).
Goodfellow, Bengio, and Courville 2016 Goodfellow, I., Bengio, Y., and Courville, A. (2016) Deep Learning. MIT Press.
Hastie et al. 2009 Hastie, T., Tibshirani, R., and Friedman, J. H. (2009) The Elements of Statistical Learning: Data Mining, Inference, and Prediction (2nd ed.). Springer.
Johnson and Khoshgoftaar 2019 Johnson, J. M., and Khoshgoftaar, T. M. (2019) “Survey on deep learning with class imbalance.” Journal of Big Data, 6(1), 27. https://doi.org/10.1186/s40537-019-0192-5
LeCun, Bengio, and Hinton 2015 LeCun, Y., Bengio, Y., and Hinton, G. (2015) “Deep learning.” Nature, 521(7553), pp. 436–444.
Liaw and Wiener 2002 Liaw, A., and Wiener, M. (2002) “Classification and regression by randomForest.” R News, 2(3), pp. 18–22.
Niculae et al. 2014 Niculae, V., Zampieri, M., Dinu, L., and Ciobanu, A. M. (2014) “Temporal Text Ranking and Automatic Dating of Texts.” Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, Volume 2: Short Papers, pp. 17–21. https://doi.org/10.3115/v1/E14-4004
Satlow 2022 Satlow, M. L. (2022) “Inscriptions of Israel/Palestine.” Jewish Studies Quarterly (JSQ), 29(4), pp. 349–369. https://doi.org/10.1628/jsq-2022-0021
Shwartz-Ziv and Armon 2022 Shwartz-Ziv, R., and Armon, A. (2022) “Tabular data: Deep learning is not all you need.” Information Fusion, 81, pp. 84–90.
Zhitomirsky-Geffet et al. 2020 Zhitomirsky-Geffet, M., Prebor, G., and Miller, I. (2020) “Ontology-based analysis of the large collection of historical Hebrew manuscripts.” Digital Scholarship in the Humanities, 35(3), pp. 688–719. https://doi.org/10.1093/llc/fqz058
Zou and Hastie 2005 Zou, H., and Hastie, T. (2005) “Regularization and variable selection via the elastic net.” Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2), pp. 301–320.