Browsing by Author "Schwendicke, Falk"
Now showing 1 - 6 of 6
Item: Artificial intelligence chatbots and large language models in dental education: Worldwide survey of educators (2024-11)
Uribe, Sergio E.; Maldupa, Ilze; Kavadella, Argyro; El Tantawi, Maha; Chaurasia, Akhilanand; Fontana, Margherita; Marino, Rodrigo; Innes, Nicola; Schwendicke, Falk; Department of Conservative Dentistry and Oral Health
INTRODUCTION: Interest is growing in the potential of artificial intelligence (AI) chatbots and large language models, such as OpenAI's ChatGPT and Google's Gemini, in dental education. This study aimed to explore dental educators' perceptions of AI chatbots and large language models, specifically their potential benefits and challenges for dental education.
MATERIALS AND METHODS: A global cross-sectional survey was conducted in May-June 2023 using a 31-item online questionnaire to assess dental educators' perceptions of AI chatbots like ChatGPT and their influence on dental education. Dental educators, representing diverse backgrounds, were asked about their use of AI, its perceived impact, barriers to using chatbots, and the future role of AI in this field.
RESULTS: A total of 428 dental educators (survey views = 1,516; response rate = 28%) with a median [25th/75th percentile] age of 45 [37, 56] years and 16 [8, 25] years of experience participated, the majority from the Americas (54%), followed by Europe (26%) and Asia (10%). Thirty-one percent of respondents already use AI tools, and 64% recognise their potential in dental education. The perceived impact of AI on dental education varied by region: Africa (4 [4-5]), Asia (4 [4-5]) and the Americas (4 [3-5]) perceived more potential than Europe (3 [3-4]). Educators stated that AI chatbots could enhance knowledge acquisition (74.3%), research (68.5%) and clinical decision-making (63.6%) but expressed concern about AI's potential to reduce human interaction (53.9%). Dental educators' chief concerns centred on the absence of clear guidelines and training for using AI chatbots.
CONCLUSION: A positive yet cautious view of AI chatbot integration in dental curricula prevails, underscoring the need for clear implementation guidelines.

Item: Core outcome measures in dental computer vision studies (DentalCOMS) (2024-11)
Büttner, Martha; Rokhshad, Rata; Brinz, Janet; Issa, Julien; Chaurasia, Akhilanand; Uribe, Sergio E.; Karteva, Teodora; Chala, Sanaa; Tichy, Antonin; Schwendicke, Falk; Department of Conservative Dentistry and Oral Health
Objectives: To improve reporting and comparability and to reduce bias in dental computer vision studies, we aimed to develop a Core Outcome Measures Set (COMS) for this field. The COMS was derived through a consensus-based process as part of the WHO/ITU/WIPO Global Initiative AI for Health (WHO/ITU/WIPO AI4H).
Methods: We first assessed existing guidance documents for diagnostic accuracy studies and interviewed experts in the field. The resulting list of outcome measures was mapped against computer vision modeling tasks, clinical fields and reporting levels. The resulting systematization focused on providing relevant outcome measures while retaining detail for meta-research and technical replication, presenting recommendations on (1) levels of reporting for different clinical fields and tasks, and (2) outcome measures. The COMS was agreed upon using a two-stage e-Delphi with 26 participants from various IADR groups, the WHO/ITU/WIPO AI4H, ADEA and AAOMFR.
Results: We assigned agreed levels of reporting to different computer vision tasks. We agreed that human expert assessment and diagnostic accuracy considerations are the only feasible means of achieving clinically meaningful evaluation. Studies should report at least eight core outcome measures: confusion matrix, accuracy, sensitivity, specificity, precision, F1 score, area under the receiver operating characteristic curve (AUROC), and area under the precision-recall curve (AUPRC).
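As a concrete illustration, the eight core outcome measures listed above can all be derived from ground-truth labels, model scores and a decision threshold. The sketch below is not part of the DentalCOMS publication; it is a minimal from-scratch example for a binary classifier (1 = lesion present), assuming both classes occur in the data:

```python
# Minimal sketch of the eight COMS measures for a binary classifier.
# All function names and data are illustrative, not from the publication.

def confusion_matrix(y_true, y_pred):
    """Return (TP, FP, FN, TN) counts for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def auroc(y_true, y_score):
    # Probability that a random positive outranks a random negative
    # (Mann-Whitney U formulation; ties count as 0.5).
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def average_precision(y_true, y_score):
    # AUPRC approximated as average precision over the ranked list.
    ranked = sorted(zip(y_score, y_true), reverse=True)
    n_pos = sum(y_true)
    if n_pos == 0:
        return 0.0
    tp, ap = 0, 0.0
    for k, (_, t) in enumerate(ranked, start=1):
        if t == 1:
            tp += 1
            ap += tp / k
    return ap / n_pos

def core_outcome_measures(y_true, y_score, threshold=0.5):
    """Compute all eight COMS measures for one model on one test set."""
    y_pred = [1 if s >= threshold else 0 for s in y_score]
    tp, fp, fn, tn = confusion_matrix(y_true, y_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # recall
    return {
        "confusion_matrix": {"TP": tp, "FP": fp, "FN": fn, "TN": tn},
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": sensitivity,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
        "precision": precision,
        "f1": (2 * precision * sensitivity / (precision + sensitivity)
               if precision + sensitivity else 0.0),
        "auroc": auroc(y_true, y_score),
        "auprc": average_precision(y_true, y_score),
    }
```

Reporting the full dictionary, including the raw confusion matrix, is what makes studies comparable: threshold-dependent measures (accuracy, F1) can then be recomputed or pooled by later meta-research.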
Conclusion: Dental researchers should report computer vision studies along the outlined COMS. Reviewers and editors may consider the defined COMS when assessing studies, and authors are encouraged to justify any deviation from it.
Clinical significance: Comparing and synthesizing dental computer vision studies is hampered by the variety of reported outcome measures. Adherence to the defined COMS is expected to increase comparability across studies, enable synthesis, and reduce selective reporting.

Item: COVID-19-related research data availability and quality according to the FAIR principles: A meta-research study (2024-11-18)
Sofi-Mahmudi, Ahmad; Raittio, Eero; Khazaei, Yeganeh; Ashraf, Javed; Schwendicke, Falk; Uribe, Sergio E.; Moher, David; Department of Conservative Dentistry and Oral Health
BACKGROUND: According to the FAIR principles, scientific research data should be findable, accessible, interoperable, and reusable. The COVID-19 pandemic triggered massive research activity and an unprecedented number of topical publications in a short time. However, no evaluation has assessed whether COVID-19-related research data complied with the FAIR principles ("FAIRness").
OBJECTIVE: To investigate the availability of open data in COVID-19-related research and to assess compliance with FAIRness.
METHODS: We conducted a comprehensive search and retrieved all open-access articles related to COVID-19 from journals indexed in PubMed and available in the Europe PubMed Central database, published from January 2020 through June 2023, using the metareadr package. Using rtransparent, a validated automated tool, we identified articles with links to their raw data hosted in a public repository. We then screened each link and included only repositories containing data specific to the corresponding paper.
Subsequently, we automatically assessed the repositories' adherence to the FAIR principles using the FAIRsFAIR Research Data Object Assessment Service (F-UJI) and the rfuji package. FAIR scores ranged from 1 to 22 and comprised four components. We report descriptive analyses for each article type, journal category, and repository, and used linear regression models to identify the factors most strongly associated with the FAIRness of the data.
RESULTS: In total, 5,700 URLs sharing data in a general-purpose repository were included in the final analysis. The mean (standard deviation, SD) level of compliance with the FAIR metrics was 9.4 (4.88). The percentages of moderate or advanced compliance were: Findability 100.0%, Accessibility 21.5%, Interoperability 46.7%, and Reusability 61.3%. Overall and component-wise monthly trends were consistent over the follow-up. Reviews (9.80, SD = 5.06, n = 160), articles in dental journals (13.67, SD = 3.51, n = 3) and data in Harvard Dataverse (15.79, SD = 3.65, n = 244) had the highest mean FAIRness scores, whereas letters (7.83, SD = 4.30, n = 55), articles in neuroscience journals (8.16, SD = 3.73, n = 63) and data deposited on GitHub (4.50, SD = 0.13, n = 2,152) had the lowest. Regression models showed that the repository was the factor most strongly associated with FAIRness scores (R2 = 0.809).
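The regression result above (repository explaining most of the variance in FAIRness scores) corresponds to fitting a score against a one-hot-encoded categorical factor and reading off R². The sketch below is not the study's code and uses made-up toy data; it shows the shape of that analysis with plain NumPy least squares:

```python
# Sketch (hypothetical data): score ~ repository via one-hot OLS, report R^2.
import numpy as np

def r_squared_for_factor(scores, factor_levels):
    """Fit score ~ categorical factor and return the coefficient of
    determination R^2 (share of variance explained by the factor)."""
    levels = sorted(set(factor_levels))
    # Design matrix: intercept column + dummy columns for all but one level.
    X = np.array([[1.0] + [1.0 if f == lvl else 0.0 for lvl in levels[1:]]
                  for f in factor_levels])
    y = np.asarray(scores, dtype=float)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot

# Toy example: FAIR scores cluster tightly within each repository,
# so the repository factor explains almost all of the variance.
scores = [15, 16, 16, 4, 5, 4, 9, 10]
repos = ["Dataverse", "Dataverse", "Dataverse",
         "GitHub", "GitHub", "GitHub", "Zenodo", "Zenodo"]
print(round(r_squared_for_factor(scores, repos), 3))  # prints 0.991
```

A high R² here means that, as in the study, knowing where the data were deposited predicts the FAIR score far better than article-level attributes do.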
CONCLUSION: This study underscores the potential for improvement across all facets of the FAIR principles, particularly Interoperability and Reusability, in data shared via general-purpose repositories during the COVID-19 pandemic.

Item: Deep Learning for Caries Detection: A Systematic Review (2022-07)
Mohammad-rahimi, Hossein; Motamedian, Saeed Reza; Rohban, Mohammad Hossein; Krois, Joachim; Uribe, Sergio; Nia, Erfan Mahmoudi; Rokhshad, Rata; Nadimi, Mohadeseh; Schwendicke, Falk; Department of Conservative Dentistry and Oral Health
Objectives: Detecting caries lesions is challenging for dentists, and deep learning models may help practitioners increase accuracy and reliability. We aimed to systematically review deep learning studies on caries detection.
Data: We selected diagnostic accuracy studies that applied deep learning models to dental imagery (including radiographs, photographs, optical coherence tomography images, and near-infrared light transillumination images). The latest version of the quality assessment tool for diagnostic accuracy studies (QUADAS-2) was used for risk-of-bias assessment. Meta-analysis was not performed due to heterogeneity in the studies' methods and performance measurements.
Sources: Databases (Medline via PubMed, Google Scholar, Scopus, Embase) and a repository (arXiv) were screened for publications published after 2010, without any language restriction.
Study selection: Of 252 potentially eligible references, 48 studies were assessed in full text and 42 were included, using classification (n = 26), object detection (n = 6) or segmentation models (n = 10). A wide range of performance metrics was used; image-, object- or pixel-level accuracy ranged between 68% and 99%. A minority of studies (n = 11) showed a low risk of bias in all domains, and 13 studies (31.0%) showed low concern regarding applicability. The accuracy of caries classification models varied:
71% to 96% on intra-oral photographs, 82% to 99.2% on peri-apical radiographs, 87.6% to 95.4% on bitewing radiographs, 68.0% to 78.0% on near-infrared transillumination images, 88.7% to 95.2% on optical coherence tomography images, and 86.1% to 96.1% on panoramic radiographs. Pooled diagnostic odds ratios ranged from 2.27 to 32,767. For detection and segmentation models, heterogeneity in reporting precluded useful pooling.
Conclusion: An increasing number of studies investigate caries detection using deep learning, with diverse architectures employed. Reported accuracy seems promising, while study and reporting quality are currently low.
Clinical significance: Deep learning models can assist decisions regarding the presence or absence of carious lesions.

Item: Publicly Available Dental Image Datasets for Artificial Intelligence (2024)
Uribe, Sergio E.; Issa, Julien; Sohrabniya, F.; Denny, A.; Kim, N.N.; Dayo, A. F.; Chaurasia, Akhilanand; Sofi-Mahmudi, Ahmad; Büttner, Martha; Schwendicke, Falk; Department of Conservative Dentistry and Oral Health
The development of artificial intelligence (AI) in dentistry requires large, well-annotated datasets, yet the availability of public dental imaging datasets remains unclear. This study aimed to provide a comprehensive overview of all publicly available dental imaging datasets to address this gap and support AI development. This observational study searched all publicly available dataset resources (academic databases, preprints, and AI challenges), focusing on datasets/articles from 2020 to 2023, with PubMed searches extending back to 2011. We comprehensively searched for dental AI datasets containing images (intraoral photographs, scans, radiographs, etc.) using relevant keywords and included datasets of >50 images obtained from publicly available sources.
We extracted dataset characteristics, patient demographics, country of origin, dataset size, ethical clearance, image details, FAIRness metrics, and metadata completeness. We screened 131,028 records and identified 16 unique dental imaging datasets, obtained from Kaggle (18.8%); GitHub, Google, Mendeley, PubMed and Zenodo (12.5% each); and Grand-Challenge, OSF and arXiv (6.25% each). The primary foci were tooth segmentation (62.5%) and labeling (56.2%). Panoramic radiography was the most common imaging modality (58.8%). Of the 13 contributing countries, China contributed the most images (2,413). Of the datasets, 75% contained annotations, although the methods used to establish labels were often unclear and inconsistent. Only 31.2% of the datasets reported ethical approval, and 56.25% did not specify a license. Most data were obtained from dental clinics (50%). Intraoral radiographs had the highest findability score in the FAIR assessment, whereas cone-beam computed tomography datasets scored lowest in all categories. These findings reveal a scarcity of publicly available dental imaging data and inconsistent metadata reporting. To promote the development of robust, equitable, and generalizable AI tools for dental diagnostics, treatment, and research, efforts are needed to address data scarcity, increase diversity, mandate metadata completeness, and ensure FAIRness in AI dental imaging research.

Item: Terminology of e-Oral Health: Consensus Report of the IADR's e-Oral Health Network Terminology Task Force (2024-02-28)
Mariño, Rodrigo J; Uribe, Sergio E; Chen, Rebecca; Schwendicke, Falk; Giraudeau, Nicolas; Scheerman, Janneke F M; Department of Conservative Dentistry and Oral Health
OBJECTIVE: Authors have reported multiple definitions of e-oral health and related terms and have used several of them interchangeably, such as mHealth, teledentistry, teleoral medicine and telehealth.
The International Association for Dental Research e-Oral Health Network (e-OHN) aimed to establish consensus on terminology related to digital technologies used in oral healthcare.
METHOD: The Crowdsourcing Delphi method used in this study comprised four main stages. In the first stage, the task force compiled a list of terms and definitions around digital health technologies based on the literature and established a panel of experts. Panellists had to be actively involved in research and/or work in e-oral health fields and willing to participate in the consensus process. In the second stage, an email-based consultation with the panel of experts confirmed an initial set of terms. The third stage consisted of (a) an online meeting where the list of terms was presented and refined, and (b) a presentation at the 2022 IADR annual meeting. The fourth stage consisted of two rounds of feedback to solicit experts' opinions on the terminology, with group discussion to reach consensus. A Delphi questionnaire was sent online to all experts to independently assess (a) the appropriateness of the terms and (b) the accompanying definitions, and to vote on whether they agreed with them. In the second round, each expert received an individualised questionnaire presenting the expert's own first-round responses and the panel's overall response (% agreement/disagreement) to each term. Agreement of 70% or higher among experts on a term and its definition was taken to represent consensus.
RESULTS: The study identified an initial set of 43 terms, which was refined to a core set of 37. Initially, 34 experts took part in the consensus process on terms and definitions; 27 completed the first round of consultations and 15 the final round.
All terms and definitions were confirmed by online voting (i.e., they achieved the agreed 70% threshold), indicating their recommended use in e-oral health research, dental public health, and clinical practice.
CONCLUSION: This is the first study in oral health organised to achieve consensus on e-oral health terminology. The terminology is presented as a resource for interested parties. The terms were also conceptualised to fit the new healthcare ecosystem and the place of e-oral health within it. Universal use of this terminology to label interventions in future research will increase the homogeneity of future studies, including systematic reviews.
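The 70% consensus rule described above amounts to a simple per-term check on the panel's votes. The sketch below (hypothetical vote data, not from the study) makes the rule explicit:

```python
# Sketch of the Delphi consensus rule: a term and its definition are
# accepted when >= 70% of responding experts agree. Data are illustrative.

CONSENSUS_THRESHOLD = 0.70

def reaches_consensus(votes, threshold=CONSENSUS_THRESHOLD):
    """votes: list of True (agree) / False (disagree) expert responses."""
    if not votes:
        return False
    return sum(votes) / len(votes) >= threshold

# e.g. 11 of 15 experts agree -> 73.3% >= 70% -> term accepted
print(reaches_consensus([True] * 11 + [False] * 4))  # prints True
```

In a two-round e-Delphi such as the one described, this check would be applied per term in each round, with terms falling below the threshold revised or dropped before the next round.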