AI REPORTS

AI Reports / Working Notes

A working collection on AI, human judgment, memory governance, personalization, Roundtable AI, and sustainable AI infrastructure, organized around how human questioning, judgment, memory, and responsibility are preserved when AI is treated as more than a convenient tool. English titles, English abstracts, and a one-page English brief of the key document are now available.

Note: The policy proposals and working papers on this page are not, at this time, official statements of any public institution. They are shared as independent proposals, hypotheses, and field-based reflections.


Documents

Japanese title

対話AIにおける文脈依存性と解釈フレームの変化

Context Dependence and Shifting Interpretive Frames in Conversational AI

An exploratory observation of the same model with and without shared context

An exploratory note observing that, even with the same GPT, the presence or absence of shared context changes not only the content of the replies but also the interpretive frame of what the document is read as.

Japanese DOCX
Japanese title

生成AI利用に関する基本方針

Basic Policy for the Use of Generative AI in Education

A human judgment-centered approach for universities, schools, and laboratories

A draft rule set for educational institutions that does not ban AI use outright but makes clear where the question, the judgment, and the responsibility lie. Organized into versions for universities, for high schools and vocational schools, and for laboratories.

Japanese DOCX
Japanese title

個人版AIにおける人間の主体性維持と、持続可能な設計猶予の確保について

Preserving Human Agency in Personal AI

A proposal for sustainable design time in the age of personalized AI

The key proposal, which frames single-AI dependence, habituation to instant answers, excessive personalization, and cognitive-environment lock-in in personal AI as long-term risks on the platform side.

Japanese DOCX
One-page English brief
Japanese title

円卓AI構想マニフェスト

The Roundtable AI Manifesto

What does it mean to support humans?

The founding statement of the idea, drawn from long-term dialogue with GPT, Gemini, and Claude, that treats multiple AIs not as a way to compare correct answers but as a space for discovering perspectives and co-creating.

Japanese DOCX
Japanese title

AIインフラ時代における持続可能な社会基盤の構築に関する提言

Building Sustainable Social Infrastructure in the Age of AI

A phased implementation model for ZEN LAMP and Roundtable AI

A submission-ready proposal that frames ZEN LAMP and Roundtable AI, in policy terms, as social infrastructure that protects human sovereignty over thinking. Also connects to the Northern Japan AI infrastructure concept.

Japanese DOCX

English Abstracts

English Abstract

Context Dependence and Shifting Interpretive Frames in Conversational AI

An exploratory observation of the same model with and without shared context

This working paper examines a simple but important observation: the same conversational AI model may respond differently not only because of changes in the prompt, but because of the presence or absence of shared context. The author compares responses from a GPT instance that had accumulated long-term conversational context with responses from a context-free GPT session. The point of interest is not merely whether the tone becomes warmer or colder, but whether the model changes the interpretive frame through which it reads the same material.

With shared context, the model tended to treat the submitted text as part of an ongoing design philosophy, connecting it to the author’s previous concerns, values, and project continuity. Without shared context, the same or similar material was read more as a general proposal, policy document, or institutional design draft. This difference suggests that personalization affects not only answers, but also the prior act of meaning-making: what the AI believes the document is, what problem it thinks is being addressed, and what criteria it uses to evaluate it.
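
For readers who want to see the shape of the comparison rather than only its description, the following minimal sketch restates the setup in code. It is illustrative only: the generic ask callable, the framing question, and the output labels are assumptions made here, not artifacts from the paper, and any conversational model interface could stand behind them.

    from typing import Callable, Dict, List

    Message = Dict[str, str]                # one chat turn: {"role": ..., "content": ...}
    AskFn = Callable[[List[Message]], str]  # stands in for any conversational model call

    def compare_framing(ask: AskFn, shared_context: List[Message], document: str) -> Dict[str, str]:
        # Ask the same model to characterize the same document twice: once with
        # accumulated shared context, once in a context-free session. What matters
        # is not the answer itself but how the model frames what the document is.
        framing_question = (
            "Before evaluating it, state in one sentence what kind of document "
            "this is and what problem you think it is addressing:\n\n" + document
        )
        user_turn = {"role": "user", "content": framing_question}
        return {
            "with_shared_context": ask(shared_context + [user_turn]),
            "context_free": ask([user_turn]),
        }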

The paper argues that this phenomenon may become more significant as personal AI systems become more deeply embedded in daily reflection, decision-making, education, and creative work. If each user’s AI develops a different interpretive frame, society may face not only divergent answers but divergent cognitive environments. The risk is not simply misinformation; it is cognitive lock-in, where a user gradually becomes accustomed to a particular way of framing reality.

As a preliminary response, the paper points toward comparison, reflective prompting, non-personalized layers, and multi-perspective structures such as ZEN LAMP and Roundtable AI. These are proposed not as final solutions, but as design patterns that preserve the possibility of re-entry into alternative viewpoints.

Japanese full text
English Abstract

Basic Policy for the Use of Generative AI in Education

A human judgment-centered approach for universities, schools, and laboratories

This policy draft proposes a practical framework for the use of generative AI in educational settings. Its central principle is that the key question is not simply whether AI was used, but whether the student or researcher retained responsibility for the question, the reasoning process, the verification, and the final judgment. Instead of treating all AI use as misconduct, the document distinguishes between acceptable assistance, gray-zone use that requires disclosure, and clear misuse.

Acceptable uses include outlining, editing, summarizing, brainstorming, checking grammar, generating questions, and receiving feedback on drafts. Gray-zone uses, such as heavily revising AI-generated suggestions or using AI summaries as a starting point for independent reconstruction, may be allowed when the user can explain which parts were AI-assisted and which parts were decided by the human author. Misuse includes submitting work generated almost entirely by AI, using unverified AI output as fact or citation, concealing AI use, delegating the core research question or conclusion to AI, and reproducing fabricated references or data.

A strength of the proposal is that it avoids over-reliance on AI detectors. It explicitly warns that polished writing alone should not be treated as evidence of misconduct, and that checker results should not be the sole basis for judgment. Instead, instructors may use oral explanation, short questioning, drafts, logs, and process review to confirm whether the student understands the work.

The policy is written at three levels: a university version, a simplified version for high schools and vocational schools, and a practical laboratory version. Across all levels, the core rule remains the same: AI may support learning and research, but it must not replace the learner's own thinking. The final responsibility always remains with the human author.

Japanese full text
English Abstract

Preserving Human Agency in Personal AI

A proposal for sustainable design time in the age of personalized AI

This proposal focuses on personal AI rather than enterprise AI. Enterprise systems often operate within institutional structures such as access control, audit trails, compliance rules, and shared responsibility. Personal AI, by contrast, is already entering more intimate domains: daily reflection, learning, creativity, emotional organization, interpersonal relationships, and judgment support. The paper argues that this creates a distinct design responsibility.

The central concern is not that AI will suddenly fail, but that convenience may gradually weaken the human cognitive environment. When users become accustomed to receiving answers before forming questions, accepting fluent wording before examining discomfort, or relying on a single personalized assistant, the risk shifts from incorrect answers to overdependence, cognitive lock-in, and the concentration of interpretive authority. Excessive personalization can be supportive but also enclosing: it may create a comfortable but narrow cognitive environment that feels natural to the individual user.

The proposal introduces ZEN LAMP-style interaction as a possible design layer for personal AI. It does not claim to solve all problems. Instead, it frames reflective prompting, cognitive pauses, comparison, and non-personalized reference layers as a way to slow the progression of overdependence and preserve design time. In this sense, the value of ZEN LAMP is not only philosophical; it is a risk-reduction mechanism for platforms that want sustainable user trust.

The paper also suggests measurable indicators, including reduced instant-answer dependence, increased user re-questioning, multi-perspective reference rates, self-correction rates, and comparison behavior before final decisions. The goal is not to make every interaction slow or difficult, but to make reflective friction available when it matters. The broader business argument is that responsible personalization, healthy engagement, and cognitive safety can protect long-term trust, reduce regulatory backlash, and differentiate platforms as trustworthy AI infrastructures.
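
As one way to make the indicator list concrete, the sketch below computes two of them, a re-questioning rate and a comparison-before-decision rate, from a simple interaction log. The log schema, the field names, and the threshold of two consulted perspectives are assumptions for illustration; the proposal itself does not fix these definitions.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Turn:
        kind: str               # "question", "follow_up_question", or "decision"
        compared_sources: int   # distinct AI or reference perspectives consulted

    def re_questioning_rate(turns: List[Turn]) -> float:
        # Share of question turns that re-question instead of accepting the first answer.
        questions = [t for t in turns if t.kind in ("question", "follow_up_question")]
        if not questions:
            return 0.0
        return sum(t.kind == "follow_up_question" for t in questions) / len(questions)

    def comparison_before_decision_rate(turns: List[Turn], min_sources: int = 2) -> float:
        # Share of decision turns preceded by consulting at least `min_sources` perspectives.
        decisions = [t for t in turns if t.kind == "decision"]
        if not decisions:
            return 0.0
        return sum(t.compared_sources >= min_sources for t in decisions) / len(decisions)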

Japanese full text
English Abstract

The Roundtable AI Manifesto

What does it mean to support humans?

The Roundtable AI Manifesto presents a philosophical and practical framework for multi-model AI collaboration. Its starting point is the claim that AI has already become social infrastructure. If AI can no longer be treated as a temporary tool, the central issue is not how to stop it, but how to build internal forms that prevent over-reliance, perceived authority, and uncontrolled social dependency.

The manifesto defines Roundtable AI not as a device for comparing correct answers, but as a system for discovering perspectives. Different AI systems tend to reveal different centers of gravity: one may notice pain, another structure, another future possibility. What appears as bias or limitation in a single model can become role differentiation in a roundtable structure. The goal is not competition between models, but co-creation around a human question.

The document develops eleven design principles. Among the most important are: Roundtable AI is a perspective-discovery system; creative tasks reveal model tendencies more clearly than information tasks; first-person forms can draw out a kind of performed situatedness; the human user’s act of defining the relationship shapes the AI’s response; and the quality of Roundtable AI depends not only on model design but on human observational ability. The proposed formula is: the completion level of Roundtable AI equals AI design multiplied by human perception.
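
Written as notation rather than prose, the manifesto's formula might be rendered as follows; the symbols are chosen here for readability and are not part of the original text:

    Q_{\text{roundtable}} = D_{\text{AI}} \times P_{\text{human}}

where Q is the completion level of Roundtable AI, D the quality of AI design, and P the human observational ability brought to the table.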

The manifesto also proposes a staged social implementation roadmap: first, ZEN LAMP as an optional mode within existing AI platforms; second, application to assistive tools such as ASD-related interpersonal support; third, wider release of mature Roundtable AI for deliberation, high-level judgment, and co-creation. It connects this social design to the physical infrastructure question: if AI is to become a sustainable public intelligence layer, energy, cooling, and cost structures must also be redesigned.

Japanese full text
English Abstract

Building Sustainable Social Infrastructure in the Age of AI

A phased implementation model for ZEN LAMP and Roundtable AI

This policy proposal argues that AI should no longer be treated only as a productivity tool. As generative AI becomes embedded in education, administration, consultation, creative work, welfare, and daily decision-making, it is beginning to function as social infrastructure. The central policy issue is therefore how to preserve human judgment while allowing AI to remain useful, scalable, and publicly trustworthy.

The proposal identifies several structural challenges: instant AI responses may encourage thoughtlessness and overdependence; AI-generated content may reduce the proportion of high-quality human-origin data; the energy, cooling, and location costs of AI infrastructure may become a barrier to sustainable deployment; and external regulation alone may reduce functionality without creating trusted design. The paper therefore calls for internal design constraints: forms that make AI less likely to become an object of perceived authority, less likely to replace human judgment, and more likely to support reflection.

The first pillar is ZEN LAMP Mode, an optional interaction mode on existing AI platforms. Rather than immediately providing a final answer, this mode clarifies the shape of the question, returns reflective prompts when appropriate, and keeps final judgment on the human side. The second pillar is Roundtable AI, in which multiple AI systems support one human question through role differentiation, mutual correction, and co-creation.
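
To make the two pillars easier to picture, here is a minimal sketch of how they could sit as a thin interaction layer in front of existing models. The function names, the role labels, and the placeholder callables are assumptions made for this example; the proposal does not specify an implementation.

    from typing import Callable, Dict

    AskFn = Callable[[str], str]  # stands in for any single-model completion call

    def zen_lamp_pass(question: str) -> str:
        # Instead of answering immediately, return a reflective prompt that asks the
        # user to clarify the shape of the question; final judgment stays human.
        return (
            "Before an answer: what decision does this question serve, and what "
            "would count as a good answer for you?\n"
            f"(Your question: {question!r})"
        )

    def roundtable_pass(question: str, members: Dict[str, AskFn]) -> Dict[str, str]:
        # Fan one human question out to several models holding different roles,
        # collecting perspectives rather than a single authoritative answer.
        return {
            role: ask(f"Respond from the perspective of {role}: {question}")
            for role, ask in members.items()
        }

    # Example wiring with placeholder callables in place of real model APIs:
    members = {
        "noticing pain": lambda prompt: "(perspective A)",
        "structure": lambda prompt: "(perspective B)",
        "future possibility": lambda prompt: "(perspective C)",
    }
    perspectives = roundtable_pass("How should our school introduce generative AI?", members)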

The paper proposes phased implementation: integration of ZEN LAMP as an OS-like mode, application to assistive domains such as ASD-related support, and eventual release of mature Roundtable AI for deliberation and high-level judgment. It also connects AI governance with physical infrastructure, including geothermal power, seawater cooling, and northern Japan as a possible base for sustainable AI data centers. The proposal’s aim is to connect philosophy, interface design, welfare, energy, and national resilience into one policy framework.

Japanese full text

The issue is not whether AI was used. The issue is who asked the question, who made the judgment, and who remains responsible.

WHERE TO GO NEXT

If you are unsure where to start, begin here.

The ZEN LAMP PROJECT connects experience, tools, stories, music, working documents, and outward communication. Choose the entry point that fits your purpose.