publications
Publications by category in reverse chronological order, generated by jekyll-scholar.
2019
- Blurred Images Lead to Bad Local Minima. Daphna Weinshall, Gal Katzhendler. CoRR 2019
High Initial visual Acuity (HIA) in newborns treated for cataracts, it is argued by Vogelsang et al. (2018), may cause impairments in configural face analysis. This HIA hypothesis runs contrary to the standard explanation of a critical period for learning face processing, and is supported by computational experiments with an artificial neural network. In our work we argue that the computational methodology used to evaluate the HIA hypothesis is flawed: it essentially shows that when a classifier is tested with images of different resolutions, the classifier benefits from seeing images of different resolutions during its training. We therefore offer a better-fitting methodology; employing this modified methodology, the HIA hypothesis does not hold in simulations using the same artificial neural network model and the same data. Vogelsang et al. (2018) also show that initial exposure to low-resolution images gives rise to larger receptive fields. Our last set of experiments tests the hypothesis that this might be the underlying reason for the observed impairments. Once again, we are unable to find an advantage to training with images of low initial acuity. We therefore conclude that simulations with artificial networks do not support the hypothesis that high initial visual acuity is detrimental.
- Potential upside of high initial visual acuity? Daphna Weinshall, Gal Katzhendler. Proceedings of the National Academy of Sciences 2019
Vogelsang et al. (1) argue that high initial visual acuity in children who underwent late treatment of congenital blindness may be responsible for subsequent impairments in configural face analysis. This hypothesis offers an exciting alternative to the standard explanation of a critical period for the ensuing impairment, which could have dramatic implications for the follow-up treatment of such children. However, close inspection of the supporting computational argument provided in ref. 1 casts doubt on this conclusion. After broadening the analysis, we conclude that the computational study in ref. 1 cannot be used as evidence that initial exposure to low-resolution images is necessarily beneficial. We were also unable to confirm another proposed implication, that initial presentation of low-resolution images can improve the generalization performance of artificial neural networks in general by promoting the development of larger receptive fields (2).
2022
- Approximate Description Length, Covering Numbers, and VC Dimension. Gal Katzhendler, Amit Daniely. 2022
Recently, Daniely and Granot introduced a new notion of complexity called Approximate Description Length (ADL). They used it to derive novel generalization bounds for neural networks that, despite substantial work, were out of reach of more classical techniques such as discretization, covering numbers, and Rademacher complexity. In this paper we explore how ADL relates to classical notions of function complexity such as covering numbers and VC dimension. We find that for functions whose range is the reals, ADL is essentially equivalent to these classical complexity measures. However, this equivalence breaks down for functions with a high-dimensional range.