New pages

IT Wiki
  • December 16, 2024 (Mon) 03:03 Distributed Two-Phase Locking (hist | edit) [4,665 bytes] Betripping (talk | contribs) (Created page with "'''Distributed Two-Phase Locking (Distributed 2PL)''' is a concurrency control protocol used in distributed database systems to ensure global serializability. In this protocol, each site manages locks independently while coordinating with other sites to maintain consistency across the distributed system. ==Key Concepts== *'''Global Serializability:''' Ensures that the execution of distributed transactions is equivalent to a serial execution. *'''Distributed Lock Manageme...") Tag: Visual edit
  • December 16, 2024 (Mon) 02:48 Primary Copy Two-Phase Locking (hist | edit) [4,224 bytes] Betripping (talk | contribs) (Created page with "'''Primary Copy Two-Phase Locking (2PL)''' is a concurrency control protocol used in distributed database systems where a primary copy of each data item is designated for lock management. Transactions must obtain locks from the primary copy of the data item to ensure consistency and serializability across the system. ==Key Concepts== *'''Primary Copy:''' A specific node in the distributed system is designated as the authoritative source for managing locks for a particula...") Tag: Visual edit
  • December 16, 2024 (Mon) 02:36 Primary Site Two-Phase Locking (hist | edit) [3,711 bytes] Betripping (talk | contribs) (Created page with "'''Primary Site Two-Phase Locking (2PL)''' is a variant of the Two-Phase Locking protocol used in distributed databases. In this approach, one node is designated as the primary site for each data item, and it coordinates the locking and transaction processing to ensure serializability across the system. ==Key Concepts== *'''Primary Site:''' A designated node that manages all lock requests for a specific data item. *'''Global Serializability:''' Ensures that all transacti...") Tag: Visual edit
  • December 16, 2024 (Mon) 02:35 Distributed Database (hist | edit) [5,341 bytes] Betripping (talk | contribs) (Created page with "'''Distributed Database''' is a collection of databases distributed across multiple physical locations that function as a single logical database. Each site can operate independently while participating in a unified database system through communication over a network. ==Key Concepts== *'''Data Distribution:''' Data is distributed across multiple sites based on factors like performance, reliability, and locality. *'''Transparency:''' Users interact with the distributed d...") Tag: Visual edit
  • December 16, 2024 (Mon) 02:29 Distributed Computing (hist | edit) [4,523 bytes] Betripping (talk | contribs) (Created page with "'''Distributed Computing''' is a field of computer science that involves a collection of independent computers working together as a single cohesive system. These computers, or nodes, communicate over a network to achieve a common goal, such as solving complex problems, processing large datasets, or enabling fault-tolerant services. ==Key Concepts== *'''Distributed System:''' A collection of independent computers that appear to the user as a single system. *'''Concurrenc...") Tag: Visual edit
  • December 16, 2024 (Mon) 02:27 Distributed Query Processing (hist | edit) [4,991 bytes] Betripping (talk | contribs) (Created page with "'''Distributed Query Processing''' is the process of executing database queries across multiple interconnected nodes in a distributed database system. It involves decomposing a high-level query into sub-queries that are executed on different nodes, combining the results, and presenting a unified output to the user. ==Key Concepts== *'''Distributed Database:''' A collection of interconnected databases located on different physical sites. *'''Query Decomposition:''' Breaki...") Tag: Visual edit
  • December 13, 2024 (Fri) 01:40 Multiversion Concurrency Control (hist | edit) [3,490 bytes] Scapegoat (talk | contribs) (Created page with "'''Multiversion Concurrency Control (MVCC)''' is a concurrency control method used in database systems that allows multiple versions of a data item to exist simultaneously. It ensures consistent reads without locking and provides high concurrency by maintaining transaction isolation. ==Key Concepts== *'''Data Versioning:''' MVCC creates a new version of a data item for every write operation, enabling transactions to access consistent snapshots. *'''Timestamps:''' Each tr...") Tag: Visual edit
  • December 13, 2024 (Fri) 01:39 Timestamp-Based Protocol (hist | edit) [3,700 bytes] Scapegoat (talk | contribs) (Created page with "'''Timestamp-Based Protocol''' is a concurrency control mechanism in database systems that uses timestamps to order transactions, ensuring serializability. Each transaction is assigned a unique timestamp when it begins, and the protocol uses these timestamps to determine the execution order of conflicting operations. ==Key Concepts== *'''Timestamp:''' A unique identifier (usually based on the system clock or a counter) assigned to each transaction when it starts. *'''Rea...") Tag: Visual edit
  • December 13, 2024 (Fri) 01:35 Lock-Based Protocol (hist | edit) [3,804 bytes] Scapegoat (talk | contribs) (Created page with "'''Lock-Based Protocol''' is a concurrency control mechanism in database systems that ensures transaction isolation by managing access to data items through locks. Locks prevent simultaneous transactions from interfering with each other, ensuring data consistency and serializability. ==Key Concepts== *'''Lock:''' A mechanism that restricts access to a data item for concurrent transactions. *'''Lock Modes:''' **'''Shared Lock (S):''' Allows multiple transactions to read a...") Tag: Visual edit
  • December 13, 2024 (Fri) 01:33 Two-Phase Locking (hist | edit) [3,740 bytes] Scapegoat (talk | contribs) (Created page with "'''Two-Phase Locking (2PL)''' is a concurrency control protocol used in database systems to ensure serializability of transactions. It operates in two distinct phases where locks are acquired and released in a specific order, preventing conflicts between concurrent transactions. ==Key Concepts== *'''Locking Protocol:''' Transactions acquire locks on data items before accessing them and release locks when no longer needed. *'''Serializability:''' 2PL guarantees that the r...") Tag: Visual edit
  • December 13, 2024 (Fri) 01:31 Concurrency Control (hist | edit) [3,880 bytes] Scapegoat (talk | contribs) (Created page with "'''Concurrency Control''' is a mechanism in database management systems (DBMS) that ensures correct and consistent transaction execution in a multi-user environment. It prevents issues such as data inconsistencies and anomalies by managing simultaneous access to the database. ==Key Concepts== *'''Transaction:''' A sequence of database operations that are executed as a single logical unit of work. *'''Isolation:''' Ensures that each transaction is executed independently w...") Tag: Visual edit
  • December 13, 2024 (Fri) 01:28 Isolation Level (Database) (hist | edit) [4,095 bytes] Scapegoat (talk | contribs) (Created page with "'''Isolation Level''' is a property in database transaction management that defines the extent to which transactions are isolated from each other. It determines how and when changes made by one transaction are visible to other concurrent transactions, affecting the trade-off between consistency and concurrency. ==Key Concepts== *'''Transaction Isolation:''' Ensures that concurrent transactions do not interfere with each other inappropriately. *'''Anomalies:''' Lower isol...") Tag: Visual edit
  • December 13, 2024 (Fri) 01:26 Non-Repeatable Read (hist | edit) [3,785 bytes] Scapegoat (talk | contribs) (Created page with "'''Non-Repeatable Read''' is a concurrency problem in database systems that occurs when a transaction reads the same data twice and gets different results due to modifications made by another transaction in the meantime. This inconsistency arises when isolation levels do not guarantee stability for repeated reads of the same data. ==Key Concepts== *'''Unstable Reads:''' The data read by a transaction changes during its execution because another transaction modifies it. *...") Tag: Visual edit
  • December 13, 2024 (Fri) 01:26 Dirty Read (Database) (hist | edit) [3,626 bytes] Scapegoat (talk | contribs) (Created page with "'''Dirty Read''' is a concurrency problem in database systems that occurs when a transaction reads uncommitted changes made by another transaction. This can lead to inconsistent or incorrect data being used in the reading transaction, especially if the changes are later rolled back. ==Key Concepts== *'''Uncommitted Data:''' Data modified by a transaction that has not yet been committed to the database. *'''Concurrency Issue:''' Dirty reads are a type of Database Anomal...") Tag: Visual edit
  • December 13, 2024 (Fri) 01:23 Precedence Graph (hist | edit) [2,874 bytes] Scapegoat (talk | contribs) (Created page with "'''Precedence Graph''', also known as a '''Serializability Graph''', is a directed graph used in database systems to determine the serializability of a transaction schedule. The graph represents the order of operations across transactions and helps identify potential conflicts that could prevent a schedule from being equivalent to a serial schedule. ==Key Concepts== *'''Nodes:''' Represent individual transactions in the schedule. *'''Edges:''' Indicate a precedence (or d...") Tag: Visual edit
  • December 12, 2024 (Thu) 07:49 Serializable (Database) (hist | edit) [4,608 bytes] Betripping (talk | contribs) (Created page with "'''Serializable''' is the highest level of isolation in database transaction management, ensuring that the outcome of concurrent transactions is equivalent to some sequential (serial) execution of those transactions. It prevents issues like dirty reads, non-repeatable reads, and phantom reads, guaranteeing the consistency of the database. ==Key Concepts== *'''Isolation Level:''' Serializable is one of the Isolation Levels defined in the SQL standard, providing the st...") Tag: Visual edit
  • December 11, 2024 (Wed) 06:23 Write-Ahead Logging (hist | edit) [3,853 bytes] Betripping (talk | contribs) (Created page with "'''Write-Ahead Logging (WAL)''' is a technique used in database management systems (DBMS) to ensure the '''durability''' and '''atomicity''' of transactions. WAL ensures that all changes to the database are recorded in a log before the changes are written to the database itself. This enables reliable crash recovery and consistency in case of a failure. ==Key Concepts== *'''Sequential Logging:''' All modifications are recorded sequentially i...") Tag: Visual edit
  • December 11, 2024 (Wed) 06:19 Durability (ACID) (hist | edit) [2,966 bytes] Betripping (talk | contribs) (Created page with "'''Durability''' is one of the four key properties in the ACID Properties of database transactions, ensuring that once a transaction has been committed, its changes are permanently saved in the database. This guarantees that data is not lost even in the event of a system crash, power failure, or hardware malfunction. ==Key Concepts== *'''Transaction Commit:''' Durability ensures that when a transaction is committed, its effects are persisted. *'''Non-...") Tag: Visual edit
  • December 11, 2024 (Wed) 05:38 Multiple Granularity Locking (hist | edit) [4,134 bytes] Betripping (talk | contribs) (Created page with "'''Multiple Granularity Locking''' is a concurrency control mechanism used in database systems to improve performance and manage locks at various levels of granularity, such as tuples, pages, tables, or the entire database. By allowing transactions to lock data items at different granular levels, this method optimizes the balance between concurrency and locking overhead. ==Key Concepts== *'''Lock Granularity:''' Refers to the size of the data item on which a lock is appl...") Tag: Visual edit
  • December 11, 2024 (Wed) 05:34 Phantom Record (hist | edit) [4,455 bytes] Betripping (talk | contribs) (Created page with "'''Phantom Record''' refers to a data entry that appears in a dataset but does not correspond to a real-world entity or valid data point. Phantom records can occur due to system errors, data corruption, improper database handling, or intentional insertion during testing or attacks. ==Causes of Phantom Records== Phantom records can arise from various sources: *'''Data Entry Errors:''' Manual input mistakes resulting in duplicate or incorrect records. *'''System Errors:'''...") Tag: Visual edit
  • December 8, 2024 (Sun) 03:56 Neural Processing Unit (hist | edit) [3,907 bytes] 162.158.159.66 (talk) (New page: '''Neural Processing Unit (NPU)''' is a specialized hardware accelerator designed to perform computations for artificial intelligence (AI) and machine learning (ML) workloads, particularly neural network operations. NPUs are optimized for tasks like matrix multiplications and convolutional operations, which are central to deep learning models. == Key Features of NPUs == NPUs offer the following features: * '''High Performance:''' Accelerate AI computations, providing significan...)
  • December 8, 2024 (Sun) 03:55 Open Neural Network Exchange (hist | edit) [3,972 bytes] 162.158.159.65 (talk) (New page: '''ONNX (Open Neural Network Exchange)''' is an open-source format for representing machine learning models. It enables interoperability between different machine learning frameworks and tools, allowing developers to train models in one framework and deploy them in another. ONNX supports a wide range of machine learning and deep learning models. == Key Features of ONNX == * '''Interoperability:''' Facilitates seamless model transfer between frameworks like PyTorch, TensorFlow,...)
  • December 8, 2024 (Sun) 03:54 LM Studio (hist | edit) [3,580 bytes] 162.158.63.106 (talk) (New page: '''LM Studio''' is an advanced tool for developing, training, and deploying large language models (LLMs). It provides an integrated platform for researchers and developers to experiment with state-of-the-art natural language processing (NLP) models. LM Studio simplifies the process of handling large-scale datasets, configuring model architectures, and optimizing performance for various applications. ==Key Features== *'''Model Training:''' Enables training of large language model...) Tag: Visual edit: Switched
  • December 8, 2024 (Sun) 03:52 Lmstudio.js (hist | edit) [3,792 bytes] 162.158.158.113 (talk) (New page: '''lmstudio.js''' is a JavaScript library designed for building, managing, and deploying machine learning models directly within web browsers. It provides tools for creating lightweight machine learning applications, enabling client-side inference and integrating pre-trained models into web-based platforms. With its focus on usability and performance, lmstudio.js simplifies machine learning workflows for web developers. ==Key Features== *'''Client-Side Machine Learning:''' Perfo...) Tag: Visual edit
  • December 3, 2024 (Tue) 06:21 Model Evaluation (hist | edit) [4,789 bytes] Deposition (talk | contribs) (New page: '''Model Evaluation''' refers to the process of assessing the performance of a machine learning model on a given dataset. It is a critical step in machine learning workflows to ensure that the model generalizes well to unseen data and performs as expected for the target application. ==Objectives of Model Evaluation== The key objectives of model evaluation are: *'''Assess Performance:''' Measure how well the model predicts outcomes. *'''Compare Models:''' Evaluate multiple models...) Tag: Visual edit
  • December 2, 2024 (Mon) 13:18 Observational Machine Learning Method (hist | edit) [4,378 bytes] Dendrogram (talk | contribs) (New page: '''Observational Machine Learning Methods''' are techniques designed to analyze data collected from observational studies rather than controlled experiments. In such studies, the assignment of treatments or interventions is not randomized, which can introduce biases and confounding factors. Observational ML methods aim to identify patterns, relationships, and causal effects within these datasets. ==Key Challenges in Observational Data== Observational data often comes with inhere...) Tag: Visual edit
  • December 2, 2024 (Mon) 13:13 Propensity Score Matching (hist | edit) [4,262 bytes] Dendrogram (talk | contribs) (New page: '''Propensity Score Matching (PSM)''' is a statistical technique used in observational studies to reduce selection bias when estimating the causal effect of a treatment or intervention. It involves pairing treated and untreated units with similar propensity scores, which represent the probability of receiving the treatment based on observed covariates. ==Key Concepts== *'''Propensity Score:''' The probability of a unit receiving the treatment, given its covariates. *'''Matching:...) Tag: Visual edit
  • December 2, 2024 (Mon) 13:13 Causal Graph (hist | edit) [3,695 bytes] Dendrogram (talk | contribs) (New page: '''Causal Graph''' is a directed graph used to represent causal relationships between variables in a dataset. Each node in the graph represents a variable, and directed edges (arrows) indicate causal influence from one variable to another. Causal graphs are widely used in causal inference, machine learning, and decision-making processes. ==Key Components of a Causal Graph== A causal graph typically consists of the following: *'''Nodes:''' Represent variables in the system (e.g.,...) Tag: Visual edit
  • December 2, 2024 (Mon) 13:12 Data Science Contents (hist | edit) [1,947 bytes] Dendrogram (talk | contribs) (New page: === 1. Understanding Data Science === * What is Data Science? * Impact on Business * Key Technologies in Data Science === 2. Data Preparation and Preprocessing === * Data Collection * Handling '''Missing Data''' and '''Outlier'''s * Normalization and Standardization === 3. Exploratory Data Analysis (EDA) === * Goals of Data Analysis * Basic Statistical Analysis * Importance of Data Visualization === 4. Supervised Learning === *...) Tag: Visual edit
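Several of the pages listed above describe Two-Phase Locking, whose core rule is that a transaction acquires all its locks in a growing phase and, once it releases any lock, may acquire no more. As an illustrative aside (not drawn from any of the listed articles), a minimal sketch of that discipline in Python — the `Transaction` class and its API are hypothetical, chosen only to make the two phases visible:

```python
class TwoPhaseLockingError(Exception):
    """Raised when a transaction tries to acquire a lock after releasing one."""


class Transaction:
    """Hypothetical sketch of the two-phase lock discipline.

    Growing phase: locks may only be acquired.
    Shrinking phase: begins at the first release; no further acquisitions.
    """

    def __init__(self, name):
        self.name = name
        self.held = set()        # data items currently locked by this transaction
        self.shrinking = False   # flips to True on the first unlock

    def lock(self, item):
        if self.shrinking:
            raise TwoPhaseLockingError(
                f"{self.name}: cannot acquire {item!r} in the shrinking phase")
        self.held.add(item)

    def unlock(self, item):
        self.held.discard(item)
        self.shrinking = True    # the shrinking phase has begun


# Usage: acquiring after the first release violates the protocol.
t = Transaction("T1")
t.lock("A")
t.lock("B")        # still in the growing phase
t.unlock("A")      # shrinking phase begins here
try:
    t.lock("C")    # violates 2PL
except TwoPhaseLockingError as e:
    print(e)
```

Real lock managers additionally distinguish shared and exclusive modes and handle deadlock; this sketch shows only the phase rule that gives the protocol its name.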